00:00:00.000 Started by upstream project "autotest-per-patch" build number 132603 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.118 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.119 The recommended git tool is: git 00:00:00.119 using credential 00000000-0000-0000-0000-000000000002 00:00:00.121 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.179 Fetching changes from the remote Git repository 00:00:00.182 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.243 Using shallow fetch with depth 1 00:00:00.243 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.243 > git --version # timeout=10 00:00:00.282 > git --version # 'git version 2.39.2' 00:00:00.282 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.305 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.305 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.873 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.887 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.899 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.899 > git config core.sparsecheckout # timeout=10 00:00:05.915 > git read-tree -mu HEAD # timeout=10 00:00:05.933 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.959 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.960 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.076 [Pipeline] Start of Pipeline 00:00:06.087 [Pipeline] library 00:00:06.089 Loading library shm_lib@master 00:00:06.089 Library shm_lib@master is cached. Copying from home. 00:00:06.105 [Pipeline] node 00:00:06.115 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:06.117 [Pipeline] { 00:00:06.126 [Pipeline] catchError 00:00:06.128 [Pipeline] { 00:00:06.140 [Pipeline] wrap 00:00:06.148 [Pipeline] { 00:00:06.157 [Pipeline] stage 00:00:06.159 [Pipeline] { (Prologue) 00:00:06.180 [Pipeline] echo 00:00:06.182 Node: VM-host-SM17 00:00:06.188 [Pipeline] cleanWs 00:00:06.196 [WS-CLEANUP] Deleting project workspace... 00:00:06.196 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.202 [WS-CLEANUP] done 00:00:06.384 [Pipeline] setCustomBuildProperty 00:00:06.452 [Pipeline] httpRequest 00:00:06.730 [Pipeline] echo 00:00:06.732 Sorcerer 10.211.164.20 is alive 00:00:06.743 [Pipeline] retry 00:00:06.745 [Pipeline] { 00:00:06.759 [Pipeline] httpRequest 00:00:06.764 HttpMethod: GET 00:00:06.764 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.765 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.773 Response Code: HTTP/1.1 200 OK 00:00:06.774 Success: Status code 200 is in the accepted range: 200,404 00:00:06.775 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.445 [Pipeline] } 00:00:12.464 [Pipeline] // retry 00:00:12.530 [Pipeline] sh 00:00:12.811 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.825 [Pipeline] httpRequest 00:00:13.213 [Pipeline] echo 00:00:13.215 Sorcerer 10.211.164.20 is alive 00:00:13.225 [Pipeline] retry 00:00:13.228 [Pipeline] { 00:00:13.241 [Pipeline] httpRequest 00:00:13.245 HttpMethod: GET 00:00:13.246 URL: http://10.211.164.20/packages/spdk_89b293437e25bb6291835abc136fb4471857aa03.tar.gz 00:00:13.246 Sending request to url: http://10.211.164.20/packages/spdk_89b293437e25bb6291835abc136fb4471857aa03.tar.gz 00:00:13.252 Response Code: HTTP/1.1 200 OK 00:00:13.253 Success: Status code 200 is in the accepted range: 200,404 00:00:13.253 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_89b293437e25bb6291835abc136fb4471857aa03.tar.gz 00:01:32.246 [Pipeline] } 00:01:32.262 [Pipeline] // retry 00:01:32.269 [Pipeline] sh 00:01:32.549 + tar --no-same-owner -xf spdk_89b293437e25bb6291835abc136fb4471857aa03.tar.gz 00:01:35.848 [Pipeline] sh 00:01:36.124 + git -C spdk log --oneline -n5 00:01:36.124 89b293437 thread: use extended version of fd group add 00:01:36.124 56ed70f67 event: use extended version of fd group add 00:01:36.124 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:01:36.124 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:01:36.124 9094b9600 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev 00:01:36.139 [Pipeline] writeFile 00:01:36.151 [Pipeline] sh 00:01:36.427 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:36.436 [Pipeline] sh 00:01:36.712 + cat autorun-spdk.conf 00:01:36.712 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:36.712 SPDK_TEST_NVMF=1 00:01:36.712 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:36.712 SPDK_TEST_URING=1 00:01:36.712 SPDK_TEST_USDT=1 00:01:36.712 SPDK_RUN_UBSAN=1 00:01:36.712 NET_TYPE=virt 00:01:36.712 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:36.717 RUN_NIGHTLY=0 00:01:36.719 [Pipeline] } 00:01:36.736 [Pipeline] // stage 00:01:36.750 [Pipeline] stage 00:01:36.751 [Pipeline] { (Run VM) 00:01:36.767 [Pipeline] sh 00:01:37.052 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:37.052 + echo 'Start stage prepare_nvme.sh' 00:01:37.052 Start stage prepare_nvme.sh 00:01:37.052 + [[ -n 3 ]] 00:01:37.052 + disk_prefix=ex3 00:01:37.052 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:37.052 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:37.052 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:37.052 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:37.052 ++ SPDK_TEST_NVMF=1 
00:01:37.052 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:37.052 ++ SPDK_TEST_URING=1 00:01:37.052 ++ SPDK_TEST_USDT=1 00:01:37.052 ++ SPDK_RUN_UBSAN=1 00:01:37.052 ++ NET_TYPE=virt 00:01:37.052 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:37.052 ++ RUN_NIGHTLY=0 00:01:37.052 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:37.052 + nvme_files=() 00:01:37.052 + declare -A nvme_files 00:01:37.052 + backend_dir=/var/lib/libvirt/images/backends 00:01:37.052 + nvme_files['nvme.img']=5G 00:01:37.052 + nvme_files['nvme-cmb.img']=5G 00:01:37.052 + nvme_files['nvme-multi0.img']=4G 00:01:37.052 + nvme_files['nvme-multi1.img']=4G 00:01:37.052 + nvme_files['nvme-multi2.img']=4G 00:01:37.052 + nvme_files['nvme-openstack.img']=8G 00:01:37.052 + nvme_files['nvme-zns.img']=5G 00:01:37.052 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:37.052 + (( SPDK_TEST_FTL == 1 )) 00:01:37.052 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:37.052 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:37.052 + for nvme in "${!nvme_files[@]}" 00:01:37.052 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:01:37.052 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:37.052 + for nvme in "${!nvme_files[@]}" 00:01:37.052 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:01:37.052 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:37.052 + for nvme in "${!nvme_files[@]}" 00:01:37.052 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:01:37.052 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:37.052 + for nvme in "${!nvme_files[@]}" 00:01:37.052 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:01:37.052 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:37.052 + for nvme in "${!nvme_files[@]}" 00:01:37.052 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:01:37.052 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:37.052 + for nvme in "${!nvme_files[@]}" 00:01:37.052 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:01:37.052 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:37.052 + for nvme in "${!nvme_files[@]}" 00:01:37.052 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:01:37.619 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:37.619 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:01:37.619 + echo 'End stage prepare_nvme.sh' 00:01:37.619 End stage prepare_nvme.sh 00:01:37.634 [Pipeline] sh 00:01:37.916 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:37.916 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b 
/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora39 00:01:37.916 00:01:37.916 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:37.916 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:37.916 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:37.916 HELP=0 00:01:37.916 DRY_RUN=0 00:01:37.916 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:01:37.916 NVME_DISKS_TYPE=nvme,nvme, 00:01:37.916 NVME_AUTO_CREATE=0 00:01:37.916 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:01:37.916 NVME_CMB=,, 00:01:37.916 NVME_PMR=,, 00:01:37.916 NVME_ZNS=,, 00:01:37.916 NVME_MS=,, 00:01:37.916 NVME_FDP=,, 00:01:37.916 SPDK_VAGRANT_DISTRO=fedora39 00:01:37.916 SPDK_VAGRANT_VMCPU=10 00:01:37.916 SPDK_VAGRANT_VMRAM=12288 00:01:37.916 SPDK_VAGRANT_PROVIDER=libvirt 00:01:37.916 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:37.916 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:37.916 SPDK_OPENSTACK_NETWORK=0 00:01:37.916 VAGRANT_PACKAGE_BOX=0 00:01:37.916 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:37.916 FORCE_DISTRO=true 00:01:37.916 VAGRANT_BOX_VERSION= 00:01:37.916 EXTRA_VAGRANTFILES= 00:01:37.916 NIC_MODEL=e1000 00:01:37.916 00:01:37.916 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:37.916 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:41.205 Bringing machine 'default' up with 'libvirt' provider... 00:01:41.464 ==> default: Creating image (snapshot of base box volume). 00:01:41.723 ==> default: Creating domain with the following settings... 
00:01:41.723 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732884492_7535e17abc37bf4e6f25 00:01:41.723 ==> default: -- Domain type: kvm 00:01:41.723 ==> default: -- Cpus: 10 00:01:41.723 ==> default: -- Feature: acpi 00:01:41.723 ==> default: -- Feature: apic 00:01:41.723 ==> default: -- Feature: pae 00:01:41.723 ==> default: -- Memory: 12288M 00:01:41.723 ==> default: -- Memory Backing: hugepages: 00:01:41.723 ==> default: -- Management MAC: 00:01:41.723 ==> default: -- Loader: 00:01:41.723 ==> default: -- Nvram: 00:01:41.723 ==> default: -- Base box: spdk/fedora39 00:01:41.723 ==> default: -- Storage pool: default 00:01:41.723 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732884492_7535e17abc37bf4e6f25.img (20G) 00:01:41.723 ==> default: -- Volume Cache: default 00:01:41.723 ==> default: -- Kernel: 00:01:41.723 ==> default: -- Initrd: 00:01:41.723 ==> default: -- Graphics Type: vnc 00:01:41.723 ==> default: -- Graphics Port: -1 00:01:41.723 ==> default: -- Graphics IP: 127.0.0.1 00:01:41.723 ==> default: -- Graphics Password: Not defined 00:01:41.723 ==> default: -- Video Type: cirrus 00:01:41.723 ==> default: -- Video VRAM: 9216 00:01:41.723 ==> default: -- Sound Type: 00:01:41.723 ==> default: -- Keymap: en-us 00:01:41.723 ==> default: -- TPM Path: 00:01:41.723 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:41.723 ==> default: -- Command line args: 00:01:41.723 ==> default: -> value=-device, 00:01:41.723 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:41.723 ==> default: -> value=-drive, 00:01:41.723 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:01:41.723 ==> default: -> value=-device, 00:01:41.723 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:41.723 ==> default: -> value=-device, 00:01:41.723 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:41.723 ==> default: -> value=-drive, 00:01:41.723 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:41.723 ==> default: -> value=-device, 00:01:41.723 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:41.723 ==> default: -> value=-drive, 00:01:41.723 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:41.723 ==> default: -> value=-device, 00:01:41.723 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:41.723 ==> default: -> value=-drive, 00:01:41.723 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:41.723 ==> default: -> value=-device, 00:01:41.723 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:41.723 ==> default: Creating shared folders metadata... 00:01:41.983 ==> default: Starting domain. 00:01:43.361 ==> default: Waiting for domain to get an IP address... 00:02:01.450 ==> default: Waiting for SSH to become available... 00:02:02.827 ==> default: Configuring and enabling network interfaces... 
00:02:07.017 default: SSH address: 192.168.121.63:22 00:02:07.017 default: SSH username: vagrant 00:02:07.017 default: SSH auth method: private key 00:02:09.554 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:17.669 ==> default: Mounting SSHFS shared folder... 00:02:18.606 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:18.606 ==> default: Checking Mount.. 00:02:19.984 ==> default: Folder Successfully Mounted! 00:02:19.984 ==> default: Running provisioner: file... 00:02:20.551 default: ~/.gitconfig => .gitconfig 00:02:21.119 00:02:21.119 SUCCESS! 00:02:21.119 00:02:21.119 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:21.119 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:21.119 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:21.119 00:02:21.129 [Pipeline] } 00:02:21.149 [Pipeline] // stage 00:02:21.160 [Pipeline] dir 00:02:21.160 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:02:21.162 [Pipeline] { 00:02:21.177 [Pipeline] catchError 00:02:21.180 [Pipeline] { 00:02:21.197 [Pipeline] sh 00:02:21.479 + vagrant ssh-config --host vagrant 00:02:21.479 + sed -ne /^Host/,$p 00:02:21.479 + tee ssh_conf 00:02:24.810 Host vagrant 00:02:24.810 HostName 192.168.121.63 00:02:24.810 User vagrant 00:02:24.810 Port 22 00:02:24.810 UserKnownHostsFile /dev/null 00:02:24.810 StrictHostKeyChecking no 00:02:24.810 PasswordAuthentication no 00:02:24.810 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:24.810 IdentitiesOnly yes 00:02:24.810 LogLevel FATAL 00:02:24.810 ForwardAgent yes 00:02:24.810 ForwardX11 yes 00:02:24.810 00:02:24.825 [Pipeline] withEnv 00:02:24.827 [Pipeline] { 00:02:24.840 [Pipeline] sh 00:02:25.120 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:25.120 source /etc/os-release 00:02:25.120 [[ -e /image.version ]] && img=$(< /image.version) 00:02:25.120 # Minimal, systemd-like check. 00:02:25.120 if [[ -e /.dockerenv ]]; then 00:02:25.120 # Clear garbage from the node's name: 00:02:25.120 # agt-er_autotest_547-896 -> autotest_547-896 00:02:25.120 # $HOSTNAME is the actual container id 00:02:25.120 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:25.120 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:25.120 # We can assume this is a mount from a host where container is running, 00:02:25.120 # so fetch its hostname to easily identify the target swarm worker. 
00:02:25.120 container="$(< /etc/hostname) ($agent)" 00:02:25.120 else 00:02:25.120 # Fallback 00:02:25.120 container=$agent 00:02:25.120 fi 00:02:25.120 fi 00:02:25.120 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:25.120 00:02:25.392 [Pipeline] } 00:02:25.408 [Pipeline] // withEnv 00:02:25.417 [Pipeline] setCustomBuildProperty 00:02:25.434 [Pipeline] stage 00:02:25.438 [Pipeline] { (Tests) 00:02:25.458 [Pipeline] sh 00:02:25.738 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:26.012 [Pipeline] sh 00:02:26.314 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:26.391 [Pipeline] timeout 00:02:26.392 Timeout set to expire in 1 hr 0 min 00:02:26.394 [Pipeline] { 00:02:26.410 [Pipeline] sh 00:02:26.690 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:27.256 HEAD is now at 89b293437 thread: use extended version of fd group add 00:02:27.268 [Pipeline] sh 00:02:27.549 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:27.823 [Pipeline] sh 00:02:28.104 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:28.380 [Pipeline] sh 00:02:28.662 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:28.922 ++ readlink -f spdk_repo 00:02:28.922 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:28.922 + [[ -n /home/vagrant/spdk_repo ]] 00:02:28.922 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:28.922 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:28.922 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:28.922 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:28.922 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:28.922 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:28.922 + cd /home/vagrant/spdk_repo 00:02:28.922 + source /etc/os-release 00:02:28.922 ++ NAME='Fedora Linux' 00:02:28.922 ++ VERSION='39 (Cloud Edition)' 00:02:28.922 ++ ID=fedora 00:02:28.922 ++ VERSION_ID=39 00:02:28.922 ++ VERSION_CODENAME= 00:02:28.922 ++ PLATFORM_ID=platform:f39 00:02:28.922 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:28.922 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:28.922 ++ LOGO=fedora-logo-icon 00:02:28.922 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:28.922 ++ HOME_URL=https://fedoraproject.org/ 00:02:28.922 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:28.922 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:28.922 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:28.922 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:28.922 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:28.922 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:28.922 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:28.922 ++ SUPPORT_END=2024-11-12 00:02:28.922 ++ VARIANT='Cloud Edition' 00:02:28.922 ++ VARIANT_ID=cloud 00:02:28.922 + uname -a 00:02:28.922 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:28.922 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:29.490 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:29.490 Hugepages 00:02:29.490 node hugesize free / total 00:02:29.490 node0 1048576kB 0 / 0 00:02:29.490 node0 2048kB 0 / 0 00:02:29.490 00:02:29.490 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:29.490 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:29.490 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:29.490 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:29.490 + rm -f /tmp/spdk-ld-path 00:02:29.490 + source autorun-spdk.conf 00:02:29.490 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:29.490 ++ SPDK_TEST_NVMF=1 00:02:29.490 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:29.490 ++ SPDK_TEST_URING=1 00:02:29.490 ++ SPDK_TEST_USDT=1 00:02:29.490 ++ SPDK_RUN_UBSAN=1 00:02:29.490 ++ NET_TYPE=virt 00:02:29.490 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:29.490 ++ RUN_NIGHTLY=0 00:02:29.490 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:29.490 + [[ -n '' ]] 00:02:29.490 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:29.490 + for M in /var/spdk/build-*-manifest.txt 00:02:29.490 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:29.490 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:29.490 + for M in /var/spdk/build-*-manifest.txt 00:02:29.490 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:29.490 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:29.490 + for M in /var/spdk/build-*-manifest.txt 00:02:29.490 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:29.490 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:29.490 ++ uname 00:02:29.490 + [[ Linux == \L\i\n\u\x ]] 00:02:29.490 + sudo dmesg -T 00:02:29.490 + sudo dmesg --clear 00:02:29.490 + dmesg_pid=5204 00:02:29.490 + sudo dmesg -Tw 00:02:29.490 + [[ Fedora Linux == FreeBSD ]] 00:02:29.490 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:29.490 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:29.490 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:29.490 + [[ -x /usr/src/fio-static/fio ]] 00:02:29.490 + export FIO_BIN=/usr/src/fio-static/fio 00:02:29.490 + FIO_BIN=/usr/src/fio-static/fio 00:02:29.490 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:29.490 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:29.490 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:29.490 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:29.490 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:29.490 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:29.490 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:29.490 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:29.490 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:29.749 12:49:01 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:29.749 12:49:01 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:29.749 12:49:01 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:29.749 12:49:01 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:29.749 12:49:01 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:29.749 12:49:01 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:02:29.749 12:49:01 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:02:29.749 12:49:01 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:29.749 12:49:01 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:02:29.749 12:49:01 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:29.749 12:49:01 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:02:29.749 12:49:01 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:29.749 12:49:01 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:29.749 12:49:01 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:29.749 12:49:01 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:29.749 12:49:01 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:29.749 12:49:01 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:29.749 12:49:01 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:29.749 12:49:01 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:29.749 12:49:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.749 12:49:01 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.749 12:49:01 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.749 12:49:01 -- paths/export.sh@5 -- $ export PATH 00:02:29.749 12:49:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.749 12:49:01 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:29.749 12:49:01 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:29.749 12:49:01 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732884541.XXXXXX 00:02:29.749 12:49:01 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732884541.syxvLy 00:02:29.749 12:49:01 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:29.750 12:49:01 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:29.750 12:49:01 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:29.750 12:49:01 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:29.750 12:49:01 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:29.750 12:49:01 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:29.750 12:49:01 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:29.750 12:49:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:29.750 12:49:01 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:02:29.750 12:49:01 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:29.750 12:49:01 -- pm/common@17 -- $ local monitor 00:02:29.750 12:49:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.750 12:49:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.750 12:49:01 -- pm/common@25 -- $ sleep 1 00:02:29.750 12:49:01 -- pm/common@21 -- $ date +%s 00:02:29.750 12:49:01 -- pm/common@21 -- $ date +%s 00:02:29.750 12:49:01 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732884541 00:02:29.750 12:49:01 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732884541 00:02:29.750 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732884541_collect-cpu-load.pm.log 00:02:29.750 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732884541_collect-vmstat.pm.log 00:02:30.686 12:49:02 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:30.686 12:49:02 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:30.686 12:49:02 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:30.686 12:49:02 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:30.686 12:49:02 -- spdk/autobuild.sh@16 -- $ date -u 00:02:30.686 Fri Nov 29 12:49:02 PM UTC 2024 00:02:30.686 12:49:02 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:30.686 v25.01-pre-278-g89b293437 00:02:30.686 12:49:02 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:30.686 12:49:02 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:30.686 12:49:02 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:30.686 12:49:02 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:30.686 12:49:02 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:30.686 12:49:02 -- common/autotest_common.sh@10 -- $ set +x 00:02:30.686 ************************************ 00:02:30.686 START TEST ubsan 00:02:30.686 ************************************ 00:02:30.686 using ubsan 00:02:30.686 12:49:02 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:30.686 00:02:30.686 real 0m0.000s 00:02:30.686 user 0m0.000s 00:02:30.686 sys 0m0.000s 00:02:30.686 12:49:02 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:30.686 ************************************ 00:02:30.686 12:49:02 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:30.686 END TEST ubsan 00:02:30.686 ************************************ 00:02:30.686 12:49:02 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:30.686 12:49:02 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:30.686 12:49:02 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:30.686 12:49:02 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:30.686 12:49:02 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:30.686 12:49:02 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:30.686 12:49:02 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:30.686 12:49:02 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:30.686 12:49:02 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:02:30.945 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:30.945 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:31.514 Using 'verbs' RDMA provider 00:02:47.324 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:59.527 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:59.527 Creating mk/config.mk...done. 00:02:59.527 Creating mk/cc.flags.mk...done. 00:02:59.527 Type 'make' to build. 
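The configure invocation above is assembled from the flags in autorun-spdk.conf. As a minimal sketch of reproducing this build step by hand outside the CI harness (assuming a checkout at /home/vagrant/spdk_repo/spdk and the same toolchain and dependencies the Fedora 39 CI image provides), the equivalent commands would be:

  # Same flags autobuild.sh passed to configure in the log above.
  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
  # The harness runs the build as "run_test make make -j10" on the next line.
  make -j10

This is only an illustrative reconstruction of the step the log records next; the harness itself wraps the same make command in run_test.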
00:02:59.527 12:49:30 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:59.527 12:49:30 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:59.527 12:49:30 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:59.527 12:49:30 -- common/autotest_common.sh@10 -- $ set +x 00:02:59.527 ************************************ 00:02:59.527 START TEST make 00:02:59.527 ************************************ 00:02:59.527 12:49:30 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:59.527 make[1]: Nothing to be done for 'all'. 00:03:11.727 The Meson build system 00:03:11.727 Version: 1.5.0 00:03:11.727 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:11.727 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:11.727 Build type: native build 00:03:11.727 Program cat found: YES (/usr/bin/cat) 00:03:11.727 Project name: DPDK 00:03:11.727 Project version: 24.03.0 00:03:11.727 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:11.727 C linker for the host machine: cc ld.bfd 2.40-14 00:03:11.727 Host machine cpu family: x86_64 00:03:11.727 Host machine cpu: x86_64 00:03:11.727 Message: ## Building in Developer Mode ## 00:03:11.727 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:11.727 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:11.727 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:11.727 Program python3 found: YES (/usr/bin/python3) 00:03:11.727 Program cat found: YES (/usr/bin/cat) 00:03:11.727 Compiler for C supports arguments -march=native: YES 00:03:11.727 Checking for size of "void *" : 8 00:03:11.727 Checking for size of "void *" : 8 (cached) 00:03:11.727 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:11.727 Library m found: YES 00:03:11.727 Library numa found: YES 00:03:11.727 Has header "numaif.h" : YES 00:03:11.727 Library fdt found: NO 00:03:11.727 Library execinfo found: NO 00:03:11.727 Has header "execinfo.h" : YES 00:03:11.727 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:11.727 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:11.727 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:11.727 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:11.727 Run-time dependency openssl found: YES 3.1.1 00:03:11.727 Run-time dependency libpcap found: YES 1.10.4 00:03:11.727 Has header "pcap.h" with dependency libpcap: YES 00:03:11.727 Compiler for C supports arguments -Wcast-qual: YES 00:03:11.727 Compiler for C supports arguments -Wdeprecated: YES 00:03:11.727 Compiler for C supports arguments -Wformat: YES 00:03:11.727 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:11.727 Compiler for C supports arguments -Wformat-security: NO 00:03:11.727 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:11.727 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:11.727 Compiler for C supports arguments -Wnested-externs: YES 00:03:11.727 Compiler for C supports arguments -Wold-style-definition: YES 00:03:11.727 Compiler for C supports arguments -Wpointer-arith: YES 00:03:11.727 Compiler for C supports arguments -Wsign-compare: YES 00:03:11.727 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:11.727 Compiler for C supports arguments -Wundef: YES 00:03:11.727 Compiler for C supports arguments -Wwrite-strings: YES 00:03:11.727 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:03:11.727 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:11.727 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:11.727 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:11.727 Program objdump found: YES (/usr/bin/objdump) 00:03:11.727 Compiler for C supports arguments -mavx512f: YES 00:03:11.727 Checking if "AVX512 checking" compiles: YES 00:03:11.727 Fetching value of define "__SSE4_2__" : 1 00:03:11.727 Fetching value of define "__AES__" : 1 00:03:11.727 Fetching value of define "__AVX__" : 1 00:03:11.727 Fetching value of define "__AVX2__" : 1 00:03:11.727 Fetching value of define "__AVX512BW__" : (undefined) 00:03:11.727 Fetching value of define "__AVX512CD__" : (undefined) 00:03:11.727 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:11.727 Fetching value of define "__AVX512F__" : (undefined) 00:03:11.727 Fetching value of define "__AVX512VL__" : (undefined) 00:03:11.727 Fetching value of define "__PCLMUL__" : 1 00:03:11.727 Fetching value of define "__RDRND__" : 1 00:03:11.727 Fetching value of define "__RDSEED__" : 1 00:03:11.727 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:11.727 Fetching value of define "__znver1__" : (undefined) 00:03:11.727 Fetching value of define "__znver2__" : (undefined) 00:03:11.727 Fetching value of define "__znver3__" : (undefined) 00:03:11.727 Fetching value of define "__znver4__" : (undefined) 00:03:11.727 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:11.727 Message: lib/log: Defining dependency "log" 00:03:11.727 Message: lib/kvargs: Defining dependency "kvargs" 00:03:11.727 Message: lib/telemetry: Defining dependency "telemetry" 00:03:11.727 Checking for function "getentropy" : NO 00:03:11.727 Message: lib/eal: Defining dependency "eal" 00:03:11.727 Message: lib/ring: Defining dependency "ring" 00:03:11.727 Message: lib/rcu: Defining dependency "rcu" 00:03:11.727 Message: lib/mempool: Defining dependency "mempool" 00:03:11.727 Message: lib/mbuf: Defining dependency "mbuf" 00:03:11.727 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:11.727 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:11.727 Compiler for C supports arguments -mpclmul: YES 00:03:11.727 Compiler for C supports arguments -maes: YES 00:03:11.727 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:11.727 Compiler for C supports arguments -mavx512bw: YES 00:03:11.727 Compiler for C supports arguments -mavx512dq: YES 00:03:11.727 Compiler for C supports arguments -mavx512vl: YES 00:03:11.727 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:11.727 Compiler for C supports arguments -mavx2: YES 00:03:11.727 Compiler for C supports arguments -mavx: YES 00:03:11.727 Message: lib/net: Defining dependency "net" 00:03:11.727 Message: lib/meter: Defining dependency "meter" 00:03:11.727 Message: lib/ethdev: Defining dependency "ethdev" 00:03:11.727 Message: lib/pci: Defining dependency "pci" 00:03:11.727 Message: lib/cmdline: Defining dependency "cmdline" 00:03:11.727 Message: lib/hash: Defining dependency "hash" 00:03:11.727 Message: lib/timer: Defining dependency "timer" 00:03:11.727 Message: lib/compressdev: Defining dependency "compressdev" 00:03:11.728 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:11.728 Message: lib/dmadev: Defining dependency "dmadev" 00:03:11.728 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:11.728 Message: lib/power: Defining 
dependency "power" 00:03:11.728 Message: lib/reorder: Defining dependency "reorder" 00:03:11.728 Message: lib/security: Defining dependency "security" 00:03:11.728 Has header "linux/userfaultfd.h" : YES 00:03:11.728 Has header "linux/vduse.h" : YES 00:03:11.728 Message: lib/vhost: Defining dependency "vhost" 00:03:11.728 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:11.728 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:11.728 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:11.728 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:11.728 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:11.728 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:11.728 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:11.728 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:11.728 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:11.728 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:11.728 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:11.728 Configuring doxy-api-html.conf using configuration 00:03:11.728 Configuring doxy-api-man.conf using configuration 00:03:11.728 Program mandb found: YES (/usr/bin/mandb) 00:03:11.728 Program sphinx-build found: NO 00:03:11.728 Configuring rte_build_config.h using configuration 00:03:11.728 Message: 00:03:11.728 ================= 00:03:11.728 Applications Enabled 00:03:11.728 ================= 00:03:11.728 00:03:11.728 apps: 00:03:11.728 00:03:11.728 00:03:11.728 Message: 00:03:11.728 ================= 00:03:11.728 Libraries Enabled 00:03:11.728 ================= 00:03:11.728 00:03:11.728 libs: 00:03:11.728 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:11.728 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:11.728 cryptodev, dmadev, power, reorder, security, vhost, 00:03:11.728 00:03:11.728 Message: 00:03:11.728 =============== 00:03:11.728 Drivers Enabled 00:03:11.728 =============== 00:03:11.728 00:03:11.728 common: 00:03:11.728 00:03:11.728 bus: 00:03:11.728 pci, vdev, 00:03:11.728 mempool: 00:03:11.728 ring, 00:03:11.728 dma: 00:03:11.728 00:03:11.728 net: 00:03:11.728 00:03:11.728 crypto: 00:03:11.728 00:03:11.728 compress: 00:03:11.728 00:03:11.728 vdpa: 00:03:11.728 00:03:11.728 00:03:11.728 Message: 00:03:11.728 ================= 00:03:11.728 Content Skipped 00:03:11.728 ================= 00:03:11.728 00:03:11.728 apps: 00:03:11.728 dumpcap: explicitly disabled via build config 00:03:11.728 graph: explicitly disabled via build config 00:03:11.728 pdump: explicitly disabled via build config 00:03:11.728 proc-info: explicitly disabled via build config 00:03:11.728 test-acl: explicitly disabled via build config 00:03:11.728 test-bbdev: explicitly disabled via build config 00:03:11.728 test-cmdline: explicitly disabled via build config 00:03:11.728 test-compress-perf: explicitly disabled via build config 00:03:11.728 test-crypto-perf: explicitly disabled via build config 00:03:11.728 test-dma-perf: explicitly disabled via build config 00:03:11.728 test-eventdev: explicitly disabled via build config 00:03:11.728 test-fib: explicitly disabled via build config 00:03:11.728 test-flow-perf: explicitly disabled via build config 00:03:11.728 test-gpudev: explicitly disabled via build config 00:03:11.728 test-mldev: explicitly disabled via build config 00:03:11.728 test-pipeline: 
explicitly disabled via build config 00:03:11.728 test-pmd: explicitly disabled via build config 00:03:11.728 test-regex: explicitly disabled via build config 00:03:11.728 test-sad: explicitly disabled via build config 00:03:11.728 test-security-perf: explicitly disabled via build config 00:03:11.728 00:03:11.728 libs: 00:03:11.728 argparse: explicitly disabled via build config 00:03:11.728 metrics: explicitly disabled via build config 00:03:11.728 acl: explicitly disabled via build config 00:03:11.728 bbdev: explicitly disabled via build config 00:03:11.728 bitratestats: explicitly disabled via build config 00:03:11.728 bpf: explicitly disabled via build config 00:03:11.728 cfgfile: explicitly disabled via build config 00:03:11.728 distributor: explicitly disabled via build config 00:03:11.728 efd: explicitly disabled via build config 00:03:11.728 eventdev: explicitly disabled via build config 00:03:11.728 dispatcher: explicitly disabled via build config 00:03:11.728 gpudev: explicitly disabled via build config 00:03:11.728 gro: explicitly disabled via build config 00:03:11.728 gso: explicitly disabled via build config 00:03:11.728 ip_frag: explicitly disabled via build config 00:03:11.728 jobstats: explicitly disabled via build config 00:03:11.728 latencystats: explicitly disabled via build config 00:03:11.728 lpm: explicitly disabled via build config 00:03:11.728 member: explicitly disabled via build config 00:03:11.728 pcapng: explicitly disabled via build config 00:03:11.728 rawdev: explicitly disabled via build config 00:03:11.728 regexdev: explicitly disabled via build config 00:03:11.728 mldev: explicitly disabled via build config 00:03:11.728 rib: explicitly disabled via build config 00:03:11.728 sched: explicitly disabled via build config 00:03:11.728 stack: explicitly disabled via build config 00:03:11.728 ipsec: explicitly disabled via build config 00:03:11.728 pdcp: explicitly disabled via build config 00:03:11.728 fib: explicitly disabled via build config 00:03:11.728 port: explicitly disabled via build config 00:03:11.728 pdump: explicitly disabled via build config 00:03:11.728 table: explicitly disabled via build config 00:03:11.728 pipeline: explicitly disabled via build config 00:03:11.728 graph: explicitly disabled via build config 00:03:11.728 node: explicitly disabled via build config 00:03:11.728 00:03:11.728 drivers: 00:03:11.728 common/cpt: not in enabled drivers build config 00:03:11.728 common/dpaax: not in enabled drivers build config 00:03:11.728 common/iavf: not in enabled drivers build config 00:03:11.728 common/idpf: not in enabled drivers build config 00:03:11.728 common/ionic: not in enabled drivers build config 00:03:11.728 common/mvep: not in enabled drivers build config 00:03:11.728 common/octeontx: not in enabled drivers build config 00:03:11.728 bus/auxiliary: not in enabled drivers build config 00:03:11.728 bus/cdx: not in enabled drivers build config 00:03:11.728 bus/dpaa: not in enabled drivers build config 00:03:11.728 bus/fslmc: not in enabled drivers build config 00:03:11.728 bus/ifpga: not in enabled drivers build config 00:03:11.728 bus/platform: not in enabled drivers build config 00:03:11.728 bus/uacce: not in enabled drivers build config 00:03:11.728 bus/vmbus: not in enabled drivers build config 00:03:11.728 common/cnxk: not in enabled drivers build config 00:03:11.728 common/mlx5: not in enabled drivers build config 00:03:11.728 common/nfp: not in enabled drivers build config 00:03:11.728 common/nitrox: not in enabled drivers build config 
00:03:11.728 common/qat: not in enabled drivers build config 00:03:11.728 common/sfc_efx: not in enabled drivers build config 00:03:11.728 mempool/bucket: not in enabled drivers build config 00:03:11.728 mempool/cnxk: not in enabled drivers build config 00:03:11.728 mempool/dpaa: not in enabled drivers build config 00:03:11.728 mempool/dpaa2: not in enabled drivers build config 00:03:11.728 mempool/octeontx: not in enabled drivers build config 00:03:11.728 mempool/stack: not in enabled drivers build config 00:03:11.728 dma/cnxk: not in enabled drivers build config 00:03:11.728 dma/dpaa: not in enabled drivers build config 00:03:11.728 dma/dpaa2: not in enabled drivers build config 00:03:11.728 dma/hisilicon: not in enabled drivers build config 00:03:11.728 dma/idxd: not in enabled drivers build config 00:03:11.728 dma/ioat: not in enabled drivers build config 00:03:11.728 dma/skeleton: not in enabled drivers build config 00:03:11.728 net/af_packet: not in enabled drivers build config 00:03:11.728 net/af_xdp: not in enabled drivers build config 00:03:11.728 net/ark: not in enabled drivers build config 00:03:11.728 net/atlantic: not in enabled drivers build config 00:03:11.728 net/avp: not in enabled drivers build config 00:03:11.728 net/axgbe: not in enabled drivers build config 00:03:11.728 net/bnx2x: not in enabled drivers build config 00:03:11.728 net/bnxt: not in enabled drivers build config 00:03:11.728 net/bonding: not in enabled drivers build config 00:03:11.728 net/cnxk: not in enabled drivers build config 00:03:11.729 net/cpfl: not in enabled drivers build config 00:03:11.729 net/cxgbe: not in enabled drivers build config 00:03:11.729 net/dpaa: not in enabled drivers build config 00:03:11.729 net/dpaa2: not in enabled drivers build config 00:03:11.729 net/e1000: not in enabled drivers build config 00:03:11.729 net/ena: not in enabled drivers build config 00:03:11.729 net/enetc: not in enabled drivers build config 00:03:11.729 net/enetfec: not in enabled drivers build config 00:03:11.729 net/enic: not in enabled drivers build config 00:03:11.729 net/failsafe: not in enabled drivers build config 00:03:11.729 net/fm10k: not in enabled drivers build config 00:03:11.729 net/gve: not in enabled drivers build config 00:03:11.729 net/hinic: not in enabled drivers build config 00:03:11.729 net/hns3: not in enabled drivers build config 00:03:11.729 net/i40e: not in enabled drivers build config 00:03:11.729 net/iavf: not in enabled drivers build config 00:03:11.729 net/ice: not in enabled drivers build config 00:03:11.729 net/idpf: not in enabled drivers build config 00:03:11.729 net/igc: not in enabled drivers build config 00:03:11.729 net/ionic: not in enabled drivers build config 00:03:11.729 net/ipn3ke: not in enabled drivers build config 00:03:11.729 net/ixgbe: not in enabled drivers build config 00:03:11.729 net/mana: not in enabled drivers build config 00:03:11.729 net/memif: not in enabled drivers build config 00:03:11.729 net/mlx4: not in enabled drivers build config 00:03:11.729 net/mlx5: not in enabled drivers build config 00:03:11.729 net/mvneta: not in enabled drivers build config 00:03:11.729 net/mvpp2: not in enabled drivers build config 00:03:11.729 net/netvsc: not in enabled drivers build config 00:03:11.729 net/nfb: not in enabled drivers build config 00:03:11.729 net/nfp: not in enabled drivers build config 00:03:11.729 net/ngbe: not in enabled drivers build config 00:03:11.729 net/null: not in enabled drivers build config 00:03:11.729 net/octeontx: not in enabled drivers 
build config 00:03:11.729 net/octeon_ep: not in enabled drivers build config 00:03:11.729 net/pcap: not in enabled drivers build config 00:03:11.729 net/pfe: not in enabled drivers build config 00:03:11.729 net/qede: not in enabled drivers build config 00:03:11.729 net/ring: not in enabled drivers build config 00:03:11.729 net/sfc: not in enabled drivers build config 00:03:11.729 net/softnic: not in enabled drivers build config 00:03:11.729 net/tap: not in enabled drivers build config 00:03:11.729 net/thunderx: not in enabled drivers build config 00:03:11.729 net/txgbe: not in enabled drivers build config 00:03:11.729 net/vdev_netvsc: not in enabled drivers build config 00:03:11.729 net/vhost: not in enabled drivers build config 00:03:11.729 net/virtio: not in enabled drivers build config 00:03:11.729 net/vmxnet3: not in enabled drivers build config 00:03:11.729 raw/*: missing internal dependency, "rawdev" 00:03:11.729 crypto/armv8: not in enabled drivers build config 00:03:11.729 crypto/bcmfs: not in enabled drivers build config 00:03:11.729 crypto/caam_jr: not in enabled drivers build config 00:03:11.729 crypto/ccp: not in enabled drivers build config 00:03:11.729 crypto/cnxk: not in enabled drivers build config 00:03:11.729 crypto/dpaa_sec: not in enabled drivers build config 00:03:11.729 crypto/dpaa2_sec: not in enabled drivers build config 00:03:11.729 crypto/ipsec_mb: not in enabled drivers build config 00:03:11.729 crypto/mlx5: not in enabled drivers build config 00:03:11.729 crypto/mvsam: not in enabled drivers build config 00:03:11.729 crypto/nitrox: not in enabled drivers build config 00:03:11.729 crypto/null: not in enabled drivers build config 00:03:11.729 crypto/octeontx: not in enabled drivers build config 00:03:11.729 crypto/openssl: not in enabled drivers build config 00:03:11.729 crypto/scheduler: not in enabled drivers build config 00:03:11.729 crypto/uadk: not in enabled drivers build config 00:03:11.729 crypto/virtio: not in enabled drivers build config 00:03:11.729 compress/isal: not in enabled drivers build config 00:03:11.729 compress/mlx5: not in enabled drivers build config 00:03:11.729 compress/nitrox: not in enabled drivers build config 00:03:11.729 compress/octeontx: not in enabled drivers build config 00:03:11.729 compress/zlib: not in enabled drivers build config 00:03:11.729 regex/*: missing internal dependency, "regexdev" 00:03:11.729 ml/*: missing internal dependency, "mldev" 00:03:11.729 vdpa/ifc: not in enabled drivers build config 00:03:11.729 vdpa/mlx5: not in enabled drivers build config 00:03:11.729 vdpa/nfp: not in enabled drivers build config 00:03:11.729 vdpa/sfc: not in enabled drivers build config 00:03:11.729 event/*: missing internal dependency, "eventdev" 00:03:11.729 baseband/*: missing internal dependency, "bbdev" 00:03:11.729 gpu/*: missing internal dependency, "gpudev" 00:03:11.729 00:03:11.729 00:03:11.729 Build targets in project: 85 00:03:11.729 00:03:11.729 DPDK 24.03.0 00:03:11.729 00:03:11.729 User defined options 00:03:11.729 buildtype : debug 00:03:11.729 default_library : shared 00:03:11.729 libdir : lib 00:03:11.729 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:11.729 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:11.729 c_link_args : 00:03:11.729 cpu_instruction_set: native 00:03:11.729 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:11.729 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:11.729 enable_docs : false 00:03:11.729 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:11.729 enable_kmods : false 00:03:11.729 max_lcores : 128 00:03:11.729 tests : false 00:03:11.729 00:03:11.729 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:11.729 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:11.729 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:11.729 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:11.729 [3/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:11.729 [4/268] Linking static target lib/librte_kvargs.a 00:03:11.729 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:11.729 [6/268] Linking static target lib/librte_log.a 00:03:12.298 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.298 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:12.298 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:12.298 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:12.298 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:12.557 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:12.557 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:12.557 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:12.557 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:12.557 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:12.557 [17/268] Linking static target lib/librte_telemetry.a 00:03:12.557 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.557 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:12.816 [20/268] Linking target lib/librte_log.so.24.1 00:03:13.075 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:13.075 [22/268] Linking target lib/librte_kvargs.so.24.1 00:03:13.364 [23/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:13.364 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:13.364 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:13.364 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:13.364 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:13.364 [28/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.364 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:13.364 [30/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:13.662 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:13.662 [32/268] Linking target lib/librte_telemetry.so.24.1 00:03:13.662 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:13.662 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:13.662 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:13.662 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:13.921 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:14.180 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:14.180 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:14.439 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:14.439 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:14.439 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:14.439 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:14.439 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:14.439 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:14.439 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:14.698 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:14.698 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:14.698 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:14.957 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:14.957 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:14.957 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:15.216 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:15.216 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:15.475 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:15.475 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:15.475 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:15.733 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:15.733 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:15.733 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:15.733 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:15.992 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:15.992 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:16.251 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:16.251 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:16.251 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:16.511 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:16.511 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:16.511 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:16.511 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:16.770 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:16.770 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:16.770 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:17.029 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:17.029 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:17.029 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:17.288 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:17.288 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:17.288 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:17.288 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:17.288 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:17.547 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:17.547 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:17.547 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:17.547 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:17.547 [86/268] Linking static target lib/librte_ring.a 00:03:17.806 [87/268] Linking static target lib/librte_eal.a 00:03:17.806 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:17.806 [89/268] Linking static target lib/librte_rcu.a 00:03:17.806 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:17.806 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:18.065 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:18.065 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:18.065 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.325 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:18.325 [96/268] Linking static target lib/librte_mempool.a 00:03:18.325 [97/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:18.325 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.325 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:18.325 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:18.583 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:18.583 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:18.583 [103/268] Linking static target lib/librte_mbuf.a 00:03:18.842 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:18.842 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:18.842 [106/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:19.101 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:19.101 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:19.101 [109/268] Linking static target lib/librte_net.a 00:03:19.361 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:19.361 [111/268] Linking static target lib/librte_meter.a 00:03:19.361 [112/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:19.620 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.620 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:19.620 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.620 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:19.620 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:19.620 [118/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.620 [119/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.188 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:20.188 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:20.188 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:20.188 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:20.448 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:20.448 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:20.708 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:20.708 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:20.708 [128/268] Linking static target lib/librte_pci.a 00:03:20.708 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:20.708 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:20.708 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:20.967 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:20.967 [133/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.967 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:20.967 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:20.967 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:21.225 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:21.225 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:21.225 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:21.226 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:21.226 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:21.226 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:21.226 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:21.226 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:21.226 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:21.484 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:21.484 [147/268] Linking static target lib/librte_ethdev.a 00:03:21.484 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:21.743 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:21.743 [150/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:21.743 [151/268] Linking static target 
lib/librte_cmdline.a 00:03:21.743 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:21.743 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:22.001 [154/268] Linking static target lib/librte_timer.a 00:03:22.001 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:22.260 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:22.260 [157/268] Linking static target lib/librte_hash.a 00:03:22.260 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:22.520 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:22.520 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:22.520 [161/268] Linking static target lib/librte_compressdev.a 00:03:22.520 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:22.520 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.779 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:22.779 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:23.038 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:23.299 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:23.299 [168/268] Linking static target lib/librte_dmadev.a 00:03:23.299 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:23.299 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:23.299 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:23.299 [172/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.299 [173/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.558 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:23.558 [175/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.558 [176/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:23.558 [177/268] Linking static target lib/librte_cryptodev.a 00:03:23.816 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:24.075 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:24.075 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:24.075 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:24.075 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:24.075 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.075 [184/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:24.334 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:24.334 [186/268] Linking static target lib/librte_power.a 00:03:24.592 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:24.851 [188/268] Linking static target lib/librte_reorder.a 00:03:24.851 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:24.851 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:25.111 [191/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:25.111 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:25.111 [193/268] Linking static target lib/librte_security.a 00:03:25.111 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:25.370 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.629 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.888 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:25.888 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:25.888 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:25.888 [200/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.888 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:26.157 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.430 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:26.430 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:26.430 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:26.689 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:26.689 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:26.689 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:26.689 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:26.689 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:26.689 [211/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:26.689 [212/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:26.949 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:26.949 [214/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:26.949 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:26.949 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:26.949 [217/268] Linking static target drivers/librte_bus_vdev.a 00:03:26.949 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:26.949 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:27.208 [220/268] Linking static target drivers/librte_bus_pci.a 00:03:27.208 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:27.208 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:27.208 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:27.208 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:27.208 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:27.208 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.208 [227/268] Linking static target drivers/librte_mempool_ring.a 00:03:27.466 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped 
by meson to capture output) 00:03:28.033 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:28.033 [230/268] Linking static target lib/librte_vhost.a 00:03:28.970 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.970 [232/268] Linking target lib/librte_eal.so.24.1 00:03:29.229 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:29.229 [234/268] Linking target lib/librte_meter.so.24.1 00:03:29.229 [235/268] Linking target lib/librte_pci.so.24.1 00:03:29.229 [236/268] Linking target lib/librte_ring.so.24.1 00:03:29.229 [237/268] Linking target lib/librte_timer.so.24.1 00:03:29.229 [238/268] Linking target lib/librte_dmadev.so.24.1 00:03:29.229 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:29.229 [240/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.229 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:29.229 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:29.488 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:29.488 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:29.488 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:29.488 [246/268] Linking target lib/librte_mempool.so.24.1 00:03:29.488 [247/268] Linking target lib/librte_rcu.so.24.1 00:03:29.488 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:29.489 [249/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.489 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:29.489 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:29.489 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:29.489 [253/268] Linking target lib/librte_mbuf.so.24.1 00:03:29.748 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:29.748 [255/268] Linking target lib/librte_reorder.so.24.1 00:03:29.748 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:03:29.748 [257/268] Linking target lib/librte_compressdev.so.24.1 00:03:29.748 [258/268] Linking target lib/librte_net.so.24.1 00:03:30.007 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:30.007 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:30.007 [261/268] Linking target lib/librte_hash.so.24.1 00:03:30.007 [262/268] Linking target lib/librte_cmdline.so.24.1 00:03:30.007 [263/268] Linking target lib/librte_security.so.24.1 00:03:30.007 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:30.007 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:30.007 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:30.265 [267/268] Linking target lib/librte_power.so.24.1 00:03:30.265 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:30.265 INFO: autodetecting backend as ninja 00:03:30.266 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:02.341 CC lib/ut_mock/mock.o 00:04:02.341 CC lib/ut/ut.o 00:04:02.341 CC lib/log/log.o 00:04:02.341 CC 
lib/log/log_flags.o 00:04:02.341 CC lib/log/log_deprecated.o 00:04:02.341 LIB libspdk_log.a 00:04:02.341 LIB libspdk_ut_mock.a 00:04:02.341 LIB libspdk_ut.a 00:04:02.341 SO libspdk_ut_mock.so.6.0 00:04:02.341 SO libspdk_log.so.7.1 00:04:02.341 SO libspdk_ut.so.2.0 00:04:02.341 SYMLINK libspdk_ut_mock.so 00:04:02.341 SYMLINK libspdk_ut.so 00:04:02.341 SYMLINK libspdk_log.so 00:04:02.341 CXX lib/trace_parser/trace.o 00:04:02.341 CC lib/ioat/ioat.o 00:04:02.341 CC lib/dma/dma.o 00:04:02.341 CC lib/util/base64.o 00:04:02.341 CC lib/util/cpuset.o 00:04:02.341 CC lib/util/bit_array.o 00:04:02.341 CC lib/util/crc16.o 00:04:02.341 CC lib/util/crc32.o 00:04:02.341 CC lib/util/crc32c.o 00:04:02.341 CC lib/vfio_user/host/vfio_user_pci.o 00:04:02.341 CC lib/vfio_user/host/vfio_user.o 00:04:02.341 CC lib/util/crc32_ieee.o 00:04:02.341 CC lib/util/crc64.o 00:04:02.341 CC lib/util/dif.o 00:04:02.341 LIB libspdk_dma.a 00:04:02.341 SO libspdk_dma.so.5.0 00:04:02.341 CC lib/util/fd.o 00:04:02.341 CC lib/util/fd_group.o 00:04:02.341 SYMLINK libspdk_dma.so 00:04:02.341 CC lib/util/file.o 00:04:02.341 CC lib/util/hexlify.o 00:04:02.341 LIB libspdk_ioat.a 00:04:02.341 CC lib/util/iov.o 00:04:02.341 CC lib/util/math.o 00:04:02.341 SO libspdk_ioat.so.7.0 00:04:02.341 LIB libspdk_vfio_user.a 00:04:02.341 SYMLINK libspdk_ioat.so 00:04:02.341 CC lib/util/net.o 00:04:02.341 CC lib/util/pipe.o 00:04:02.341 CC lib/util/strerror_tls.o 00:04:02.341 SO libspdk_vfio_user.so.5.0 00:04:02.341 CC lib/util/string.o 00:04:02.341 CC lib/util/uuid.o 00:04:02.341 SYMLINK libspdk_vfio_user.so 00:04:02.341 CC lib/util/xor.o 00:04:02.341 CC lib/util/zipf.o 00:04:02.341 CC lib/util/md5.o 00:04:02.341 LIB libspdk_util.a 00:04:02.341 SO libspdk_util.so.10.1 00:04:02.341 SYMLINK libspdk_util.so 00:04:02.341 LIB libspdk_trace_parser.a 00:04:02.341 SO libspdk_trace_parser.so.6.0 00:04:02.341 SYMLINK libspdk_trace_parser.so 00:04:02.341 CC lib/vmd/vmd.o 00:04:02.341 CC lib/vmd/led.o 00:04:02.341 CC lib/json/json_parse.o 00:04:02.341 CC lib/rdma_utils/rdma_utils.o 00:04:02.341 CC lib/idxd/idxd.o 00:04:02.341 CC lib/json/json_util.o 00:04:02.341 CC lib/idxd/idxd_user.o 00:04:02.341 CC lib/json/json_write.o 00:04:02.341 CC lib/env_dpdk/env.o 00:04:02.341 CC lib/conf/conf.o 00:04:02.341 CC lib/env_dpdk/memory.o 00:04:02.341 CC lib/env_dpdk/pci.o 00:04:02.341 LIB libspdk_conf.a 00:04:02.341 CC lib/idxd/idxd_kernel.o 00:04:02.341 CC lib/env_dpdk/init.o 00:04:02.341 LIB libspdk_rdma_utils.a 00:04:02.341 SO libspdk_conf.so.6.0 00:04:02.341 SO libspdk_rdma_utils.so.1.0 00:04:02.341 LIB libspdk_json.a 00:04:02.341 SYMLINK libspdk_conf.so 00:04:02.341 SO libspdk_json.so.6.0 00:04:02.341 CC lib/env_dpdk/threads.o 00:04:02.341 SYMLINK libspdk_rdma_utils.so 00:04:02.341 CC lib/env_dpdk/pci_ioat.o 00:04:02.341 SYMLINK libspdk_json.so 00:04:02.341 CC lib/env_dpdk/pci_virtio.o 00:04:02.341 CC lib/env_dpdk/pci_vmd.o 00:04:02.341 CC lib/env_dpdk/pci_idxd.o 00:04:02.341 CC lib/env_dpdk/pci_event.o 00:04:02.341 CC lib/env_dpdk/sigbus_handler.o 00:04:02.341 CC lib/env_dpdk/pci_dpdk.o 00:04:02.341 LIB libspdk_idxd.a 00:04:02.341 LIB libspdk_vmd.a 00:04:02.341 SO libspdk_idxd.so.12.1 00:04:02.341 SO libspdk_vmd.so.6.0 00:04:02.341 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:02.341 CC lib/rdma_provider/common.o 00:04:02.341 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:02.341 SYMLINK libspdk_idxd.so 00:04:02.341 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:02.341 SYMLINK libspdk_vmd.so 00:04:02.341 LIB libspdk_rdma_provider.a 00:04:02.341 CC lib/jsonrpc/jsonrpc_client.o 
00:04:02.341 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:02.341 CC lib/jsonrpc/jsonrpc_server.o 00:04:02.341 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:02.341 SO libspdk_rdma_provider.so.7.0 00:04:02.341 SYMLINK libspdk_rdma_provider.so 00:04:02.341 LIB libspdk_jsonrpc.a 00:04:02.341 SO libspdk_jsonrpc.so.6.0 00:04:02.341 SYMLINK libspdk_jsonrpc.so 00:04:02.341 LIB libspdk_env_dpdk.a 00:04:02.341 SO libspdk_env_dpdk.so.15.1 00:04:02.341 CC lib/rpc/rpc.o 00:04:02.341 SYMLINK libspdk_env_dpdk.so 00:04:02.341 LIB libspdk_rpc.a 00:04:02.341 SO libspdk_rpc.so.6.0 00:04:02.341 SYMLINK libspdk_rpc.so 00:04:02.341 CC lib/keyring/keyring.o 00:04:02.341 CC lib/keyring/keyring_rpc.o 00:04:02.341 CC lib/trace/trace_flags.o 00:04:02.341 CC lib/trace/trace_rpc.o 00:04:02.341 CC lib/trace/trace.o 00:04:02.341 CC lib/notify/notify.o 00:04:02.341 CC lib/notify/notify_rpc.o 00:04:02.341 LIB libspdk_notify.a 00:04:02.341 SO libspdk_notify.so.6.0 00:04:02.341 LIB libspdk_trace.a 00:04:02.341 LIB libspdk_keyring.a 00:04:02.341 SYMLINK libspdk_notify.so 00:04:02.341 SO libspdk_trace.so.11.0 00:04:02.341 SO libspdk_keyring.so.2.0 00:04:02.341 SYMLINK libspdk_trace.so 00:04:02.341 SYMLINK libspdk_keyring.so 00:04:02.341 CC lib/sock/sock.o 00:04:02.341 CC lib/sock/sock_rpc.o 00:04:02.341 CC lib/thread/thread.o 00:04:02.341 CC lib/thread/iobuf.o 00:04:02.600 LIB libspdk_sock.a 00:04:02.600 SO libspdk_sock.so.10.0 00:04:02.600 SYMLINK libspdk_sock.so 00:04:03.168 CC lib/nvme/nvme_ctrlr.o 00:04:03.168 CC lib/nvme/nvme_fabric.o 00:04:03.168 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:03.168 CC lib/nvme/nvme_ns_cmd.o 00:04:03.168 CC lib/nvme/nvme_pcie.o 00:04:03.168 CC lib/nvme/nvme_ns.o 00:04:03.168 CC lib/nvme/nvme_pcie_common.o 00:04:03.168 CC lib/nvme/nvme_qpair.o 00:04:03.168 CC lib/nvme/nvme.o 00:04:03.746 LIB libspdk_thread.a 00:04:03.746 SO libspdk_thread.so.11.0 00:04:03.746 CC lib/nvme/nvme_quirks.o 00:04:03.746 CC lib/nvme/nvme_transport.o 00:04:03.746 CC lib/nvme/nvme_discovery.o 00:04:03.746 SYMLINK libspdk_thread.so 00:04:03.746 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:04.023 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:04.023 CC lib/nvme/nvme_tcp.o 00:04:04.023 CC lib/nvme/nvme_opal.o 00:04:04.318 CC lib/accel/accel.o 00:04:04.318 CC lib/accel/accel_rpc.o 00:04:04.318 CC lib/accel/accel_sw.o 00:04:04.576 CC lib/nvme/nvme_io_msg.o 00:04:04.576 CC lib/nvme/nvme_poll_group.o 00:04:04.576 CC lib/nvme/nvme_zns.o 00:04:04.576 CC lib/blob/blobstore.o 00:04:04.576 CC lib/nvme/nvme_stubs.o 00:04:04.576 CC lib/nvme/nvme_auth.o 00:04:04.836 CC lib/blob/request.o 00:04:05.095 CC lib/blob/zeroes.o 00:04:05.095 CC lib/blob/blob_bs_dev.o 00:04:05.095 CC lib/nvme/nvme_cuse.o 00:04:05.095 CC lib/nvme/nvme_rdma.o 00:04:05.354 LIB libspdk_accel.a 00:04:05.354 SO libspdk_accel.so.16.0 00:04:05.354 CC lib/init/json_config.o 00:04:05.354 CC lib/init/subsystem.o 00:04:05.354 SYMLINK libspdk_accel.so 00:04:05.354 CC lib/virtio/virtio.o 00:04:05.354 CC lib/init/subsystem_rpc.o 00:04:05.612 CC lib/init/rpc.o 00:04:05.612 CC lib/virtio/virtio_vhost_user.o 00:04:05.612 CC lib/virtio/virtio_vfio_user.o 00:04:05.612 CC lib/fsdev/fsdev.o 00:04:05.612 CC lib/virtio/virtio_pci.o 00:04:05.871 CC lib/fsdev/fsdev_io.o 00:04:05.871 CC lib/bdev/bdev.o 00:04:05.871 LIB libspdk_init.a 00:04:05.871 SO libspdk_init.so.6.0 00:04:05.871 SYMLINK libspdk_init.so 00:04:05.871 CC lib/fsdev/fsdev_rpc.o 00:04:05.871 CC lib/bdev/bdev_rpc.o 00:04:05.871 CC lib/bdev/bdev_zone.o 00:04:06.129 LIB libspdk_virtio.a 00:04:06.129 SO libspdk_virtio.so.7.0 00:04:06.129 SYMLINK 
libspdk_virtio.so 00:04:06.129 CC lib/bdev/part.o 00:04:06.129 CC lib/bdev/scsi_nvme.o 00:04:06.387 CC lib/event/app.o 00:04:06.387 CC lib/event/log_rpc.o 00:04:06.387 CC lib/event/app_rpc.o 00:04:06.387 CC lib/event/reactor.o 00:04:06.387 LIB libspdk_fsdev.a 00:04:06.387 CC lib/event/scheduler_static.o 00:04:06.387 SO libspdk_fsdev.so.2.0 00:04:06.387 SYMLINK libspdk_fsdev.so 00:04:06.646 LIB libspdk_nvme.a 00:04:06.646 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:06.646 LIB libspdk_event.a 00:04:06.905 SO libspdk_event.so.14.0 00:04:06.905 SO libspdk_nvme.so.15.0 00:04:06.905 SYMLINK libspdk_event.so 00:04:07.163 SYMLINK libspdk_nvme.so 00:04:07.422 LIB libspdk_fuse_dispatcher.a 00:04:07.422 SO libspdk_fuse_dispatcher.so.1.0 00:04:07.422 SYMLINK libspdk_fuse_dispatcher.so 00:04:07.682 LIB libspdk_blob.a 00:04:07.682 SO libspdk_blob.so.12.0 00:04:07.682 SYMLINK libspdk_blob.so 00:04:07.942 CC lib/lvol/lvol.o 00:04:07.942 CC lib/blobfs/blobfs.o 00:04:07.942 CC lib/blobfs/tree.o 00:04:08.879 LIB libspdk_bdev.a 00:04:08.879 SO libspdk_bdev.so.17.0 00:04:08.879 SYMLINK libspdk_bdev.so 00:04:08.879 LIB libspdk_blobfs.a 00:04:08.879 LIB libspdk_lvol.a 00:04:08.879 SO libspdk_blobfs.so.11.0 00:04:08.879 SO libspdk_lvol.so.11.0 00:04:09.137 CC lib/nbd/nbd.o 00:04:09.137 CC lib/nbd/nbd_rpc.o 00:04:09.137 CC lib/ftl/ftl_core.o 00:04:09.137 CC lib/scsi/dev.o 00:04:09.137 CC lib/scsi/lun.o 00:04:09.137 CC lib/ftl/ftl_init.o 00:04:09.137 CC lib/ublk/ublk.o 00:04:09.137 CC lib/nvmf/ctrlr.o 00:04:09.137 SYMLINK libspdk_lvol.so 00:04:09.137 CC lib/nvmf/ctrlr_discovery.o 00:04:09.137 SYMLINK libspdk_blobfs.so 00:04:09.137 CC lib/nvmf/ctrlr_bdev.o 00:04:09.137 CC lib/nvmf/subsystem.o 00:04:09.394 CC lib/nvmf/nvmf.o 00:04:09.394 CC lib/ftl/ftl_layout.o 00:04:09.394 CC lib/scsi/port.o 00:04:09.394 LIB libspdk_nbd.a 00:04:09.394 CC lib/ftl/ftl_debug.o 00:04:09.394 SO libspdk_nbd.so.7.0 00:04:09.652 CC lib/scsi/scsi.o 00:04:09.652 CC lib/nvmf/nvmf_rpc.o 00:04:09.652 SYMLINK libspdk_nbd.so 00:04:09.652 CC lib/nvmf/transport.o 00:04:09.652 CC lib/nvmf/tcp.o 00:04:09.652 CC lib/ftl/ftl_io.o 00:04:09.652 CC lib/ublk/ublk_rpc.o 00:04:09.652 CC lib/scsi/scsi_bdev.o 00:04:09.652 CC lib/scsi/scsi_pr.o 00:04:09.911 LIB libspdk_ublk.a 00:04:09.911 SO libspdk_ublk.so.3.0 00:04:09.911 CC lib/ftl/ftl_sb.o 00:04:09.911 SYMLINK libspdk_ublk.so 00:04:09.911 CC lib/ftl/ftl_l2p.o 00:04:10.169 CC lib/ftl/ftl_l2p_flat.o 00:04:10.169 CC lib/scsi/scsi_rpc.o 00:04:10.169 CC lib/ftl/ftl_nv_cache.o 00:04:10.169 CC lib/ftl/ftl_band.o 00:04:10.169 CC lib/ftl/ftl_band_ops.o 00:04:10.169 CC lib/nvmf/stubs.o 00:04:10.169 CC lib/nvmf/mdns_server.o 00:04:10.436 CC lib/scsi/task.o 00:04:10.436 CC lib/nvmf/rdma.o 00:04:10.436 CC lib/nvmf/auth.o 00:04:10.436 LIB libspdk_scsi.a 00:04:10.695 CC lib/ftl/ftl_writer.o 00:04:10.695 SO libspdk_scsi.so.9.0 00:04:10.695 CC lib/ftl/ftl_rq.o 00:04:10.695 SYMLINK libspdk_scsi.so 00:04:10.695 CC lib/ftl/ftl_reloc.o 00:04:10.695 CC lib/ftl/ftl_l2p_cache.o 00:04:10.695 CC lib/ftl/ftl_p2l.o 00:04:10.953 CC lib/ftl/ftl_p2l_log.o 00:04:10.953 CC lib/ftl/mngt/ftl_mngt.o 00:04:10.953 CC lib/iscsi/conn.o 00:04:11.211 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:11.211 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:11.211 CC lib/iscsi/init_grp.o 00:04:11.211 CC lib/iscsi/iscsi.o 00:04:11.211 CC lib/vhost/vhost.o 00:04:11.211 CC lib/iscsi/param.o 00:04:11.211 CC lib/iscsi/portal_grp.o 00:04:11.211 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:11.211 CC lib/iscsi/tgt_node.o 00:04:11.470 CC lib/iscsi/iscsi_subsystem.o 00:04:11.470 CC 
lib/iscsi/iscsi_rpc.o 00:04:11.470 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:11.470 CC lib/iscsi/task.o 00:04:11.728 CC lib/vhost/vhost_rpc.o 00:04:11.728 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:11.728 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:11.728 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:11.728 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:11.728 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:11.986 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:11.986 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:11.986 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:11.986 CC lib/ftl/utils/ftl_conf.o 00:04:11.986 CC lib/ftl/utils/ftl_md.o 00:04:11.986 CC lib/ftl/utils/ftl_mempool.o 00:04:11.986 CC lib/ftl/utils/ftl_bitmap.o 00:04:12.245 CC lib/ftl/utils/ftl_property.o 00:04:12.245 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:12.245 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:12.245 CC lib/vhost/vhost_scsi.o 00:04:12.245 CC lib/vhost/vhost_blk.o 00:04:12.245 CC lib/vhost/rte_vhost_user.o 00:04:12.245 LIB libspdk_nvmf.a 00:04:12.502 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:12.502 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:12.502 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:12.502 SO libspdk_nvmf.so.20.0 00:04:12.502 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:12.502 LIB libspdk_iscsi.a 00:04:12.502 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:12.502 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:12.760 SO libspdk_iscsi.so.8.0 00:04:12.760 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:12.760 SYMLINK libspdk_nvmf.so 00:04:12.760 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:12.760 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:12.760 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:12.760 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:12.760 SYMLINK libspdk_iscsi.so 00:04:12.760 CC lib/ftl/base/ftl_base_dev.o 00:04:12.760 CC lib/ftl/base/ftl_base_bdev.o 00:04:13.019 CC lib/ftl/ftl_trace.o 00:04:13.277 LIB libspdk_ftl.a 00:04:13.537 LIB libspdk_vhost.a 00:04:13.537 SO libspdk_ftl.so.9.0 00:04:13.537 SO libspdk_vhost.so.8.0 00:04:13.537 SYMLINK libspdk_vhost.so 00:04:13.800 SYMLINK libspdk_ftl.so 00:04:14.063 CC module/env_dpdk/env_dpdk_rpc.o 00:04:14.063 CC module/blob/bdev/blob_bdev.o 00:04:14.063 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:14.063 CC module/scheduler/gscheduler/gscheduler.o 00:04:14.063 CC module/keyring/file/keyring.o 00:04:14.063 CC module/sock/uring/uring.o 00:04:14.063 CC module/sock/posix/posix.o 00:04:14.063 CC module/accel/error/accel_error.o 00:04:14.063 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:14.063 CC module/fsdev/aio/fsdev_aio.o 00:04:14.063 LIB libspdk_env_dpdk_rpc.a 00:04:14.321 SO libspdk_env_dpdk_rpc.so.6.0 00:04:14.321 SYMLINK libspdk_env_dpdk_rpc.so 00:04:14.321 CC module/accel/error/accel_error_rpc.o 00:04:14.321 CC module/keyring/file/keyring_rpc.o 00:04:14.321 LIB libspdk_scheduler_gscheduler.a 00:04:14.321 LIB libspdk_scheduler_dpdk_governor.a 00:04:14.321 SO libspdk_scheduler_gscheduler.so.4.0 00:04:14.321 LIB libspdk_scheduler_dynamic.a 00:04:14.321 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:14.321 SO libspdk_scheduler_dynamic.so.4.0 00:04:14.321 SYMLINK libspdk_scheduler_gscheduler.so 00:04:14.321 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:14.321 SYMLINK libspdk_scheduler_dynamic.so 00:04:14.321 LIB libspdk_accel_error.a 00:04:14.321 LIB libspdk_keyring_file.a 00:04:14.321 LIB libspdk_blob_bdev.a 00:04:14.580 SO libspdk_keyring_file.so.2.0 00:04:14.581 SO libspdk_blob_bdev.so.12.0 00:04:14.581 SO libspdk_accel_error.so.2.0 00:04:14.581 SYMLINK libspdk_blob_bdev.so 00:04:14.581 SYMLINK libspdk_accel_error.so 
00:04:14.581 SYMLINK libspdk_keyring_file.so 00:04:14.581 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:14.581 CC module/fsdev/aio/linux_aio_mgr.o 00:04:14.581 CC module/accel/ioat/accel_ioat.o 00:04:14.581 CC module/accel/iaa/accel_iaa.o 00:04:14.581 CC module/accel/dsa/accel_dsa.o 00:04:14.581 CC module/keyring/linux/keyring.o 00:04:14.840 CC module/keyring/linux/keyring_rpc.o 00:04:14.840 CC module/accel/iaa/accel_iaa_rpc.o 00:04:14.840 CC module/accel/ioat/accel_ioat_rpc.o 00:04:14.840 LIB libspdk_fsdev_aio.a 00:04:14.840 LIB libspdk_sock_uring.a 00:04:14.840 SO libspdk_sock_uring.so.5.0 00:04:14.840 SO libspdk_fsdev_aio.so.1.0 00:04:14.840 LIB libspdk_keyring_linux.a 00:04:14.840 CC module/bdev/delay/vbdev_delay.o 00:04:14.840 LIB libspdk_sock_posix.a 00:04:14.840 LIB libspdk_accel_iaa.a 00:04:14.840 SO libspdk_keyring_linux.so.1.0 00:04:14.840 LIB libspdk_accel_ioat.a 00:04:14.840 SYMLINK libspdk_sock_uring.so 00:04:14.840 SYMLINK libspdk_fsdev_aio.so 00:04:14.840 CC module/accel/dsa/accel_dsa_rpc.o 00:04:14.840 SO libspdk_sock_posix.so.6.0 00:04:14.840 SO libspdk_accel_iaa.so.3.0 00:04:14.840 SO libspdk_accel_ioat.so.6.0 00:04:14.840 SYMLINK libspdk_keyring_linux.so 00:04:14.840 CC module/bdev/error/vbdev_error.o 00:04:14.840 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:15.098 CC module/bdev/gpt/gpt.o 00:04:15.098 SYMLINK libspdk_sock_posix.so 00:04:15.098 SYMLINK libspdk_accel_iaa.so 00:04:15.098 CC module/bdev/error/vbdev_error_rpc.o 00:04:15.098 SYMLINK libspdk_accel_ioat.so 00:04:15.098 CC module/bdev/gpt/vbdev_gpt.o 00:04:15.098 LIB libspdk_accel_dsa.a 00:04:15.098 SO libspdk_accel_dsa.so.5.0 00:04:15.098 CC module/bdev/lvol/vbdev_lvol.o 00:04:15.098 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:15.098 SYMLINK libspdk_accel_dsa.so 00:04:15.098 CC module/bdev/malloc/bdev_malloc.o 00:04:15.098 CC module/blobfs/bdev/blobfs_bdev.o 00:04:15.357 LIB libspdk_bdev_error.a 00:04:15.357 LIB libspdk_bdev_delay.a 00:04:15.357 SO libspdk_bdev_error.so.6.0 00:04:15.357 LIB libspdk_bdev_gpt.a 00:04:15.357 SO libspdk_bdev_delay.so.6.0 00:04:15.357 SO libspdk_bdev_gpt.so.6.0 00:04:15.357 CC module/bdev/passthru/vbdev_passthru.o 00:04:15.357 CC module/bdev/nvme/bdev_nvme.o 00:04:15.357 SYMLINK libspdk_bdev_error.so 00:04:15.357 CC module/bdev/null/bdev_null.o 00:04:15.357 SYMLINK libspdk_bdev_delay.so 00:04:15.357 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:15.357 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:15.357 SYMLINK libspdk_bdev_gpt.so 00:04:15.357 CC module/bdev/null/bdev_null_rpc.o 00:04:15.652 CC module/bdev/raid/bdev_raid.o 00:04:15.652 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:15.652 CC module/bdev/raid/bdev_raid_rpc.o 00:04:15.652 CC module/bdev/raid/bdev_raid_sb.o 00:04:15.652 LIB libspdk_blobfs_bdev.a 00:04:15.652 CC module/bdev/raid/raid0.o 00:04:15.652 SO libspdk_blobfs_bdev.so.6.0 00:04:15.652 LIB libspdk_bdev_lvol.a 00:04:15.652 LIB libspdk_bdev_null.a 00:04:15.652 LIB libspdk_bdev_passthru.a 00:04:15.652 SYMLINK libspdk_blobfs_bdev.so 00:04:15.652 SO libspdk_bdev_lvol.so.6.0 00:04:15.652 SO libspdk_bdev_null.so.6.0 00:04:15.652 CC module/bdev/raid/raid1.o 00:04:15.652 SO libspdk_bdev_passthru.so.6.0 00:04:15.652 LIB libspdk_bdev_malloc.a 00:04:15.652 SO libspdk_bdev_malloc.so.6.0 00:04:15.652 SYMLINK libspdk_bdev_lvol.so 00:04:15.652 SYMLINK libspdk_bdev_null.so 00:04:15.942 SYMLINK libspdk_bdev_passthru.so 00:04:15.942 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:15.942 CC module/bdev/raid/concat.o 00:04:15.942 CC module/bdev/nvme/nvme_rpc.o 00:04:15.942 SYMLINK 
libspdk_bdev_malloc.so 00:04:15.942 CC module/bdev/nvme/bdev_mdns_client.o 00:04:15.942 CC module/bdev/nvme/vbdev_opal.o 00:04:15.942 CC module/bdev/split/vbdev_split.o 00:04:15.942 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:15.942 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:15.942 CC module/bdev/split/vbdev_split_rpc.o 00:04:15.942 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:16.201 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:16.201 LIB libspdk_bdev_split.a 00:04:16.201 CC module/bdev/uring/bdev_uring.o 00:04:16.201 SO libspdk_bdev_split.so.6.0 00:04:16.201 SYMLINK libspdk_bdev_split.so 00:04:16.201 CC module/bdev/uring/bdev_uring_rpc.o 00:04:16.201 LIB libspdk_bdev_zone_block.a 00:04:16.201 CC module/bdev/aio/bdev_aio.o 00:04:16.201 CC module/bdev/ftl/bdev_ftl.o 00:04:16.201 SO libspdk_bdev_zone_block.so.6.0 00:04:16.459 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:16.459 CC module/bdev/aio/bdev_aio_rpc.o 00:04:16.459 SYMLINK libspdk_bdev_zone_block.so 00:04:16.459 CC module/bdev/iscsi/bdev_iscsi.o 00:04:16.459 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:16.459 LIB libspdk_bdev_raid.a 00:04:16.719 LIB libspdk_bdev_uring.a 00:04:16.719 LIB libspdk_bdev_ftl.a 00:04:16.719 SO libspdk_bdev_raid.so.6.0 00:04:16.719 SO libspdk_bdev_uring.so.6.0 00:04:16.719 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:16.719 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:16.719 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:16.719 SO libspdk_bdev_ftl.so.6.0 00:04:16.719 LIB libspdk_bdev_aio.a 00:04:16.719 SYMLINK libspdk_bdev_uring.so 00:04:16.719 SYMLINK libspdk_bdev_raid.so 00:04:16.719 SYMLINK libspdk_bdev_ftl.so 00:04:16.719 SO libspdk_bdev_aio.so.6.0 00:04:16.719 SYMLINK libspdk_bdev_aio.so 00:04:16.719 LIB libspdk_bdev_iscsi.a 00:04:16.978 SO libspdk_bdev_iscsi.so.6.0 00:04:16.978 SYMLINK libspdk_bdev_iscsi.so 00:04:17.237 LIB libspdk_bdev_virtio.a 00:04:17.237 SO libspdk_bdev_virtio.so.6.0 00:04:17.237 SYMLINK libspdk_bdev_virtio.so 00:04:17.806 LIB libspdk_bdev_nvme.a 00:04:18.064 SO libspdk_bdev_nvme.so.7.1 00:04:18.064 SYMLINK libspdk_bdev_nvme.so 00:04:18.632 CC module/event/subsystems/iobuf/iobuf.o 00:04:18.632 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:18.632 CC module/event/subsystems/scheduler/scheduler.o 00:04:18.632 CC module/event/subsystems/sock/sock.o 00:04:18.632 CC module/event/subsystems/fsdev/fsdev.o 00:04:18.632 CC module/event/subsystems/vmd/vmd.o 00:04:18.632 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:18.632 CC module/event/subsystems/keyring/keyring.o 00:04:18.632 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:18.632 LIB libspdk_event_fsdev.a 00:04:18.632 LIB libspdk_event_vhost_blk.a 00:04:18.632 LIB libspdk_event_scheduler.a 00:04:18.632 LIB libspdk_event_keyring.a 00:04:18.890 LIB libspdk_event_sock.a 00:04:18.890 LIB libspdk_event_vmd.a 00:04:18.890 SO libspdk_event_vhost_blk.so.3.0 00:04:18.890 LIB libspdk_event_iobuf.a 00:04:18.890 SO libspdk_event_scheduler.so.4.0 00:04:18.890 SO libspdk_event_fsdev.so.1.0 00:04:18.890 SO libspdk_event_keyring.so.1.0 00:04:18.890 SO libspdk_event_sock.so.5.0 00:04:18.890 SO libspdk_event_vmd.so.6.0 00:04:18.890 SO libspdk_event_iobuf.so.3.0 00:04:18.890 SYMLINK libspdk_event_vhost_blk.so 00:04:18.890 SYMLINK libspdk_event_fsdev.so 00:04:18.890 SYMLINK libspdk_event_scheduler.so 00:04:18.891 SYMLINK libspdk_event_keyring.so 00:04:18.891 SYMLINK libspdk_event_sock.so 00:04:18.891 SYMLINK libspdk_event_vmd.so 00:04:18.891 SYMLINK libspdk_event_iobuf.so 00:04:19.150 CC module/event/subsystems/accel/accel.o 
00:04:19.408 LIB libspdk_event_accel.a 00:04:19.408 SO libspdk_event_accel.so.6.0 00:04:19.408 SYMLINK libspdk_event_accel.so 00:04:19.667 CC module/event/subsystems/bdev/bdev.o 00:04:19.926 LIB libspdk_event_bdev.a 00:04:19.926 SO libspdk_event_bdev.so.6.0 00:04:19.926 SYMLINK libspdk_event_bdev.so 00:04:20.184 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:20.184 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:20.184 CC module/event/subsystems/ublk/ublk.o 00:04:20.184 CC module/event/subsystems/nbd/nbd.o 00:04:20.184 CC module/event/subsystems/scsi/scsi.o 00:04:20.442 LIB libspdk_event_nbd.a 00:04:20.442 LIB libspdk_event_ublk.a 00:04:20.442 LIB libspdk_event_scsi.a 00:04:20.442 SO libspdk_event_nbd.so.6.0 00:04:20.442 SO libspdk_event_ublk.so.3.0 00:04:20.442 SO libspdk_event_scsi.so.6.0 00:04:20.442 SYMLINK libspdk_event_nbd.so 00:04:20.442 LIB libspdk_event_nvmf.a 00:04:20.442 SYMLINK libspdk_event_ublk.so 00:04:20.442 SYMLINK libspdk_event_scsi.so 00:04:20.442 SO libspdk_event_nvmf.so.6.0 00:04:20.701 SYMLINK libspdk_event_nvmf.so 00:04:20.701 CC module/event/subsystems/iscsi/iscsi.o 00:04:20.701 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:20.960 LIB libspdk_event_iscsi.a 00:04:20.960 LIB libspdk_event_vhost_scsi.a 00:04:20.960 SO libspdk_event_vhost_scsi.so.3.0 00:04:20.960 SO libspdk_event_iscsi.so.6.0 00:04:20.960 SYMLINK libspdk_event_vhost_scsi.so 00:04:20.960 SYMLINK libspdk_event_iscsi.so 00:04:21.218 SO libspdk.so.6.0 00:04:21.218 SYMLINK libspdk.so 00:04:21.477 CXX app/trace/trace.o 00:04:21.477 CC app/spdk_lspci/spdk_lspci.o 00:04:21.477 CC app/spdk_nvme_perf/perf.o 00:04:21.477 CC app/trace_record/trace_record.o 00:04:21.477 CC app/spdk_nvme_identify/identify.o 00:04:21.477 CC app/nvmf_tgt/nvmf_main.o 00:04:21.477 CC app/iscsi_tgt/iscsi_tgt.o 00:04:21.477 CC examples/util/zipf/zipf.o 00:04:21.477 CC test/thread/poller_perf/poller_perf.o 00:04:21.736 CC app/spdk_tgt/spdk_tgt.o 00:04:21.736 LINK spdk_lspci 00:04:21.736 LINK nvmf_tgt 00:04:21.736 LINK zipf 00:04:21.736 LINK iscsi_tgt 00:04:21.736 LINK spdk_trace_record 00:04:21.736 LINK poller_perf 00:04:21.736 LINK spdk_tgt 00:04:21.994 LINK spdk_trace 00:04:21.994 CC app/spdk_nvme_discover/discovery_aer.o 00:04:21.994 CC app/spdk_top/spdk_top.o 00:04:22.252 CC examples/ioat/perf/perf.o 00:04:22.252 CC examples/vmd/lsvmd/lsvmd.o 00:04:22.252 CC examples/idxd/perf/perf.o 00:04:22.252 CC examples/vmd/led/led.o 00:04:22.252 LINK spdk_nvme_discover 00:04:22.252 CC test/dma/test_dma/test_dma.o 00:04:22.252 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:22.252 LINK lsvmd 00:04:22.252 LINK ioat_perf 00:04:22.252 LINK led 00:04:22.252 LINK spdk_nvme_identify 00:04:22.511 LINK spdk_nvme_perf 00:04:22.511 LINK interrupt_tgt 00:04:22.511 CC app/spdk_dd/spdk_dd.o 00:04:22.511 LINK idxd_perf 00:04:22.511 CC examples/ioat/verify/verify.o 00:04:22.770 TEST_HEADER include/spdk/accel.h 00:04:22.770 TEST_HEADER include/spdk/accel_module.h 00:04:22.770 TEST_HEADER include/spdk/assert.h 00:04:22.770 TEST_HEADER include/spdk/barrier.h 00:04:22.770 TEST_HEADER include/spdk/base64.h 00:04:22.770 TEST_HEADER include/spdk/bdev.h 00:04:22.770 TEST_HEADER include/spdk/bdev_module.h 00:04:22.770 TEST_HEADER include/spdk/bdev_zone.h 00:04:22.770 TEST_HEADER include/spdk/bit_array.h 00:04:22.770 TEST_HEADER include/spdk/bit_pool.h 00:04:22.770 TEST_HEADER include/spdk/blob_bdev.h 00:04:22.770 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:22.770 CC test/app/bdev_svc/bdev_svc.o 00:04:22.770 TEST_HEADER include/spdk/blobfs.h 00:04:22.770 
TEST_HEADER include/spdk/blob.h 00:04:22.770 TEST_HEADER include/spdk/conf.h 00:04:22.770 CC app/fio/nvme/fio_plugin.o 00:04:22.770 TEST_HEADER include/spdk/config.h 00:04:22.770 TEST_HEADER include/spdk/cpuset.h 00:04:22.770 TEST_HEADER include/spdk/crc16.h 00:04:22.770 TEST_HEADER include/spdk/crc32.h 00:04:22.770 TEST_HEADER include/spdk/crc64.h 00:04:22.770 TEST_HEADER include/spdk/dif.h 00:04:22.770 TEST_HEADER include/spdk/dma.h 00:04:22.770 TEST_HEADER include/spdk/endian.h 00:04:22.770 TEST_HEADER include/spdk/env_dpdk.h 00:04:22.770 TEST_HEADER include/spdk/env.h 00:04:22.770 TEST_HEADER include/spdk/event.h 00:04:22.770 TEST_HEADER include/spdk/fd_group.h 00:04:22.770 TEST_HEADER include/spdk/fd.h 00:04:22.770 LINK test_dma 00:04:22.770 TEST_HEADER include/spdk/file.h 00:04:22.770 TEST_HEADER include/spdk/fsdev.h 00:04:22.770 CC examples/thread/thread/thread_ex.o 00:04:22.770 TEST_HEADER include/spdk/fsdev_module.h 00:04:22.770 TEST_HEADER include/spdk/ftl.h 00:04:22.770 CC examples/sock/hello_world/hello_sock.o 00:04:22.771 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:22.771 TEST_HEADER include/spdk/gpt_spec.h 00:04:22.771 TEST_HEADER include/spdk/hexlify.h 00:04:22.771 TEST_HEADER include/spdk/histogram_data.h 00:04:22.771 TEST_HEADER include/spdk/idxd.h 00:04:22.771 LINK verify 00:04:22.771 TEST_HEADER include/spdk/idxd_spec.h 00:04:22.771 TEST_HEADER include/spdk/init.h 00:04:22.771 TEST_HEADER include/spdk/ioat.h 00:04:22.771 TEST_HEADER include/spdk/ioat_spec.h 00:04:22.771 TEST_HEADER include/spdk/iscsi_spec.h 00:04:22.771 TEST_HEADER include/spdk/json.h 00:04:22.771 TEST_HEADER include/spdk/jsonrpc.h 00:04:22.771 TEST_HEADER include/spdk/keyring.h 00:04:22.771 TEST_HEADER include/spdk/keyring_module.h 00:04:22.771 TEST_HEADER include/spdk/likely.h 00:04:22.771 TEST_HEADER include/spdk/log.h 00:04:22.771 TEST_HEADER include/spdk/lvol.h 00:04:22.771 TEST_HEADER include/spdk/md5.h 00:04:22.771 TEST_HEADER include/spdk/memory.h 00:04:22.771 TEST_HEADER include/spdk/mmio.h 00:04:22.771 TEST_HEADER include/spdk/nbd.h 00:04:22.771 TEST_HEADER include/spdk/net.h 00:04:22.771 TEST_HEADER include/spdk/notify.h 00:04:22.771 TEST_HEADER include/spdk/nvme.h 00:04:22.771 TEST_HEADER include/spdk/nvme_intel.h 00:04:22.771 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:22.771 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:22.771 TEST_HEADER include/spdk/nvme_spec.h 00:04:22.771 TEST_HEADER include/spdk/nvme_zns.h 00:04:22.771 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:22.771 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:22.771 TEST_HEADER include/spdk/nvmf.h 00:04:22.771 TEST_HEADER include/spdk/nvmf_spec.h 00:04:22.771 TEST_HEADER include/spdk/nvmf_transport.h 00:04:22.771 TEST_HEADER include/spdk/opal.h 00:04:22.771 TEST_HEADER include/spdk/opal_spec.h 00:04:22.771 TEST_HEADER include/spdk/pci_ids.h 00:04:22.771 TEST_HEADER include/spdk/pipe.h 00:04:22.771 TEST_HEADER include/spdk/queue.h 00:04:22.771 TEST_HEADER include/spdk/reduce.h 00:04:22.771 TEST_HEADER include/spdk/rpc.h 00:04:22.771 TEST_HEADER include/spdk/scheduler.h 00:04:22.771 TEST_HEADER include/spdk/scsi.h 00:04:22.771 TEST_HEADER include/spdk/scsi_spec.h 00:04:22.771 TEST_HEADER include/spdk/sock.h 00:04:22.771 TEST_HEADER include/spdk/stdinc.h 00:04:22.771 TEST_HEADER include/spdk/string.h 00:04:22.771 TEST_HEADER include/spdk/thread.h 00:04:22.771 TEST_HEADER include/spdk/trace.h 00:04:22.771 TEST_HEADER include/spdk/trace_parser.h 00:04:22.771 TEST_HEADER include/spdk/tree.h 00:04:22.771 TEST_HEADER 
include/spdk/ublk.h 00:04:22.771 TEST_HEADER include/spdk/util.h 00:04:22.771 TEST_HEADER include/spdk/uuid.h 00:04:22.771 TEST_HEADER include/spdk/version.h 00:04:22.771 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:22.771 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:22.771 TEST_HEADER include/spdk/vhost.h 00:04:22.771 TEST_HEADER include/spdk/vmd.h 00:04:22.771 TEST_HEADER include/spdk/xor.h 00:04:22.771 TEST_HEADER include/spdk/zipf.h 00:04:22.771 CXX test/cpp_headers/accel.o 00:04:23.029 LINK bdev_svc 00:04:23.030 LINK spdk_top 00:04:23.030 LINK spdk_dd 00:04:23.030 CC test/env/mem_callbacks/mem_callbacks.o 00:04:23.030 LINK hello_sock 00:04:23.030 LINK thread 00:04:23.030 CXX test/cpp_headers/accel_module.o 00:04:23.030 CC app/vhost/vhost.o 00:04:23.030 CC test/event/event_perf/event_perf.o 00:04:23.288 CC test/rpc_client/rpc_client_test.o 00:04:23.288 LINK event_perf 00:04:23.288 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:23.288 CXX test/cpp_headers/assert.o 00:04:23.288 LINK spdk_nvme 00:04:23.288 LINK vhost 00:04:23.288 LINK rpc_client_test 00:04:23.288 CC test/accel/dif/dif.o 00:04:23.288 CC test/blobfs/mkfs/mkfs.o 00:04:23.288 CC examples/nvme/hello_world/hello_world.o 00:04:23.547 CXX test/cpp_headers/barrier.o 00:04:23.547 CC test/event/reactor/reactor.o 00:04:23.547 CC app/fio/bdev/fio_plugin.o 00:04:23.547 CXX test/cpp_headers/base64.o 00:04:23.547 CXX test/cpp_headers/bdev.o 00:04:23.547 LINK mem_callbacks 00:04:23.547 LINK reactor 00:04:23.547 LINK mkfs 00:04:23.547 LINK hello_world 00:04:23.804 LINK nvme_fuzz 00:04:23.804 CXX test/cpp_headers/bdev_module.o 00:04:23.804 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:23.804 CC test/env/vtophys/vtophys.o 00:04:23.804 CC test/event/reactor_perf/reactor_perf.o 00:04:23.804 CC test/lvol/esnap/esnap.o 00:04:23.804 CC examples/nvme/reconnect/reconnect.o 00:04:24.062 LINK vtophys 00:04:24.062 CXX test/cpp_headers/bdev_zone.o 00:04:24.062 LINK spdk_bdev 00:04:24.062 CC test/nvme/aer/aer.o 00:04:24.062 LINK reactor_perf 00:04:24.062 LINK dif 00:04:24.062 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:24.321 CC test/nvme/reset/reset.o 00:04:24.321 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:24.321 CXX test/cpp_headers/bit_array.o 00:04:24.321 LINK reconnect 00:04:24.321 CC test/nvme/sgl/sgl.o 00:04:24.321 CC test/event/app_repeat/app_repeat.o 00:04:24.321 LINK aer 00:04:24.321 CXX test/cpp_headers/bit_pool.o 00:04:24.321 LINK env_dpdk_post_init 00:04:24.321 LINK hello_fsdev 00:04:24.579 LINK app_repeat 00:04:24.579 LINK reset 00:04:24.579 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:24.579 CXX test/cpp_headers/blob_bdev.o 00:04:24.579 CC examples/nvme/arbitration/arbitration.o 00:04:24.579 LINK sgl 00:04:24.579 CC test/env/memory/memory_ut.o 00:04:24.579 CC test/env/pci/pci_ut.o 00:04:24.838 CC test/app/histogram_perf/histogram_perf.o 00:04:24.838 CXX test/cpp_headers/blobfs_bdev.o 00:04:24.838 CC test/event/scheduler/scheduler.o 00:04:24.838 CC test/nvme/e2edp/nvme_dp.o 00:04:24.838 LINK histogram_perf 00:04:24.838 LINK arbitration 00:04:24.838 CXX test/cpp_headers/blobfs.o 00:04:25.097 LINK scheduler 00:04:25.097 LINK nvme_manage 00:04:25.097 LINK pci_ut 00:04:25.097 CC test/app/jsoncat/jsoncat.o 00:04:25.097 LINK nvme_dp 00:04:25.097 CXX test/cpp_headers/blob.o 00:04:25.097 CC test/app/stub/stub.o 00:04:25.355 CC examples/nvme/hotplug/hotplug.o 00:04:25.355 CC test/nvme/overhead/overhead.o 00:04:25.355 LINK jsoncat 00:04:25.355 CXX test/cpp_headers/conf.o 00:04:25.355 LINK stub 00:04:25.355 CC 
test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:25.355 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:25.355 LINK iscsi_fuzz 00:04:25.355 CC examples/accel/perf/accel_perf.o 00:04:25.355 CXX test/cpp_headers/config.o 00:04:25.614 LINK hotplug 00:04:25.614 CXX test/cpp_headers/cpuset.o 00:04:25.614 LINK overhead 00:04:25.614 CXX test/cpp_headers/crc16.o 00:04:25.614 CC test/nvme/err_injection/err_injection.o 00:04:25.872 CC test/bdev/bdevio/bdevio.o 00:04:25.872 CC examples/blob/hello_world/hello_blob.o 00:04:25.872 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:25.872 LINK memory_ut 00:04:25.872 LINK vhost_fuzz 00:04:25.872 CC examples/blob/cli/blobcli.o 00:04:25.872 CXX test/cpp_headers/crc32.o 00:04:25.872 LINK err_injection 00:04:25.872 LINK cmb_copy 00:04:25.872 LINK accel_perf 00:04:26.130 LINK hello_blob 00:04:26.130 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:26.130 CC examples/nvme/abort/abort.o 00:04:26.130 CXX test/cpp_headers/crc64.o 00:04:26.130 LINK bdevio 00:04:26.130 CXX test/cpp_headers/dif.o 00:04:26.130 CC test/nvme/startup/startup.o 00:04:26.130 LINK pmr_persistence 00:04:26.130 CC test/nvme/reserve/reserve.o 00:04:26.389 CXX test/cpp_headers/dma.o 00:04:26.389 CXX test/cpp_headers/endian.o 00:04:26.389 CC test/nvme/simple_copy/simple_copy.o 00:04:26.389 LINK blobcli 00:04:26.389 LINK startup 00:04:26.389 LINK abort 00:04:26.389 LINK reserve 00:04:26.389 CC examples/bdev/hello_world/hello_bdev.o 00:04:26.389 CXX test/cpp_headers/env_dpdk.o 00:04:26.647 CXX test/cpp_headers/env.o 00:04:26.647 CC examples/bdev/bdevperf/bdevperf.o 00:04:26.647 LINK simple_copy 00:04:26.647 CC test/nvme/connect_stress/connect_stress.o 00:04:26.647 CXX test/cpp_headers/event.o 00:04:26.647 CC test/nvme/boot_partition/boot_partition.o 00:04:26.647 CC test/nvme/compliance/nvme_compliance.o 00:04:26.647 CC test/nvme/fused_ordering/fused_ordering.o 00:04:26.647 LINK hello_bdev 00:04:26.647 LINK connect_stress 00:04:26.906 CXX test/cpp_headers/fd_group.o 00:04:26.906 LINK boot_partition 00:04:26.906 CC test/nvme/fdp/fdp.o 00:04:26.906 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:26.906 CXX test/cpp_headers/fd.o 00:04:26.906 CXX test/cpp_headers/file.o 00:04:26.906 CXX test/cpp_headers/fsdev.o 00:04:26.906 LINK fused_ordering 00:04:26.906 CC test/nvme/cuse/cuse.o 00:04:26.906 LINK doorbell_aers 00:04:27.165 LINK nvme_compliance 00:04:27.165 CXX test/cpp_headers/fsdev_module.o 00:04:27.165 LINK fdp 00:04:27.165 CXX test/cpp_headers/ftl.o 00:04:27.165 CXX test/cpp_headers/fuse_dispatcher.o 00:04:27.165 CXX test/cpp_headers/gpt_spec.o 00:04:27.165 CXX test/cpp_headers/hexlify.o 00:04:27.165 CXX test/cpp_headers/histogram_data.o 00:04:27.165 CXX test/cpp_headers/idxd.o 00:04:27.165 CXX test/cpp_headers/idxd_spec.o 00:04:27.165 LINK bdevperf 00:04:27.424 CXX test/cpp_headers/init.o 00:04:27.424 CXX test/cpp_headers/ioat.o 00:04:27.424 CXX test/cpp_headers/ioat_spec.o 00:04:27.424 CXX test/cpp_headers/iscsi_spec.o 00:04:27.424 CXX test/cpp_headers/json.o 00:04:27.424 CXX test/cpp_headers/jsonrpc.o 00:04:27.424 CXX test/cpp_headers/keyring.o 00:04:27.424 CXX test/cpp_headers/keyring_module.o 00:04:27.424 CXX test/cpp_headers/likely.o 00:04:27.424 CXX test/cpp_headers/log.o 00:04:27.424 CXX test/cpp_headers/lvol.o 00:04:27.424 CXX test/cpp_headers/md5.o 00:04:27.682 CXX test/cpp_headers/memory.o 00:04:27.682 CXX test/cpp_headers/mmio.o 00:04:27.682 CXX test/cpp_headers/nbd.o 00:04:27.682 CXX test/cpp_headers/net.o 00:04:27.682 CXX test/cpp_headers/notify.o 00:04:27.682 CXX 
test/cpp_headers/nvme.o 00:04:27.682 CC examples/nvmf/nvmf/nvmf.o 00:04:27.682 CXX test/cpp_headers/nvme_intel.o 00:04:27.682 CXX test/cpp_headers/nvme_ocssd.o 00:04:27.682 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:27.941 CXX test/cpp_headers/nvme_spec.o 00:04:27.941 CXX test/cpp_headers/nvme_zns.o 00:04:27.941 CXX test/cpp_headers/nvmf_cmd.o 00:04:27.941 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:27.941 CXX test/cpp_headers/nvmf.o 00:04:27.941 CXX test/cpp_headers/nvmf_spec.o 00:04:27.941 CXX test/cpp_headers/nvmf_transport.o 00:04:27.941 LINK nvmf 00:04:27.941 CXX test/cpp_headers/opal.o 00:04:27.941 CXX test/cpp_headers/opal_spec.o 00:04:27.941 CXX test/cpp_headers/pci_ids.o 00:04:27.941 CXX test/cpp_headers/pipe.o 00:04:28.199 CXX test/cpp_headers/queue.o 00:04:28.199 CXX test/cpp_headers/reduce.o 00:04:28.199 CXX test/cpp_headers/rpc.o 00:04:28.199 CXX test/cpp_headers/scheduler.o 00:04:28.199 CXX test/cpp_headers/scsi.o 00:04:28.199 CXX test/cpp_headers/scsi_spec.o 00:04:28.199 CXX test/cpp_headers/sock.o 00:04:28.199 CXX test/cpp_headers/stdinc.o 00:04:28.199 CXX test/cpp_headers/string.o 00:04:28.199 CXX test/cpp_headers/thread.o 00:04:28.458 LINK cuse 00:04:28.458 CXX test/cpp_headers/trace.o 00:04:28.458 CXX test/cpp_headers/trace_parser.o 00:04:28.458 CXX test/cpp_headers/tree.o 00:04:28.458 CXX test/cpp_headers/ublk.o 00:04:28.458 CXX test/cpp_headers/util.o 00:04:28.458 CXX test/cpp_headers/uuid.o 00:04:28.458 CXX test/cpp_headers/version.o 00:04:28.458 CXX test/cpp_headers/vfio_user_pci.o 00:04:28.458 CXX test/cpp_headers/vfio_user_spec.o 00:04:28.458 CXX test/cpp_headers/vhost.o 00:04:28.458 CXX test/cpp_headers/vmd.o 00:04:28.458 CXX test/cpp_headers/xor.o 00:04:28.458 CXX test/cpp_headers/zipf.o 00:04:29.025 LINK esnap 00:04:29.593 00:04:29.593 real 1m30.614s 00:04:29.593 user 8m8.123s 00:04:29.593 sys 1m42.310s 00:04:29.593 12:51:00 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:29.593 12:51:00 make -- common/autotest_common.sh@10 -- $ set +x 00:04:29.593 ************************************ 00:04:29.593 END TEST make 00:04:29.593 ************************************ 00:04:29.593 12:51:00 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:29.593 12:51:00 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:29.593 12:51:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:29.593 12:51:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:29.593 12:51:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:29.593 12:51:00 -- pm/common@44 -- $ pid=5246 00:04:29.593 12:51:00 -- pm/common@50 -- $ kill -TERM 5246 00:04:29.593 12:51:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:29.593 12:51:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:29.593 12:51:00 -- pm/common@44 -- $ pid=5248 00:04:29.593 12:51:00 -- pm/common@50 -- $ kill -TERM 5248 00:04:29.593 12:51:00 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:29.593 12:51:00 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:29.593 12:51:00 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:29.593 12:51:00 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:29.593 12:51:00 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:29.593 12:51:01 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:29.593 
12:51:01 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.593 12:51:01 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.593 12:51:01 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.593 12:51:01 -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.593 12:51:01 -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.593 12:51:01 -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.593 12:51:01 -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.593 12:51:01 -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.593 12:51:01 -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.593 12:51:01 -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.593 12:51:01 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.593 12:51:01 -- scripts/common.sh@344 -- # case "$op" in 00:04:29.593 12:51:01 -- scripts/common.sh@345 -- # : 1 00:04:29.593 12:51:01 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.593 12:51:01 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:29.593 12:51:01 -- scripts/common.sh@365 -- # decimal 1 00:04:29.593 12:51:01 -- scripts/common.sh@353 -- # local d=1 00:04:29.593 12:51:01 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.593 12:51:01 -- scripts/common.sh@355 -- # echo 1 00:04:29.593 12:51:01 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.593 12:51:01 -- scripts/common.sh@366 -- # decimal 2 00:04:29.593 12:51:01 -- scripts/common.sh@353 -- # local d=2 00:04:29.593 12:51:01 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.593 12:51:01 -- scripts/common.sh@355 -- # echo 2 00:04:29.593 12:51:01 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.593 12:51:01 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.593 12:51:01 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.593 12:51:01 -- scripts/common.sh@368 -- # return 0 00:04:29.593 12:51:01 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.593 12:51:01 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:29.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.593 --rc genhtml_branch_coverage=1 00:04:29.593 --rc genhtml_function_coverage=1 00:04:29.593 --rc genhtml_legend=1 00:04:29.593 --rc geninfo_all_blocks=1 00:04:29.593 --rc geninfo_unexecuted_blocks=1 00:04:29.593 00:04:29.593 ' 00:04:29.593 12:51:01 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:29.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.593 --rc genhtml_branch_coverage=1 00:04:29.593 --rc genhtml_function_coverage=1 00:04:29.593 --rc genhtml_legend=1 00:04:29.593 --rc geninfo_all_blocks=1 00:04:29.593 --rc geninfo_unexecuted_blocks=1 00:04:29.593 00:04:29.593 ' 00:04:29.593 12:51:01 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:29.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.593 --rc genhtml_branch_coverage=1 00:04:29.593 --rc genhtml_function_coverage=1 00:04:29.593 --rc genhtml_legend=1 00:04:29.593 --rc geninfo_all_blocks=1 00:04:29.593 --rc geninfo_unexecuted_blocks=1 00:04:29.593 00:04:29.593 ' 00:04:29.593 12:51:01 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:29.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.593 --rc genhtml_branch_coverage=1 00:04:29.593 --rc genhtml_function_coverage=1 00:04:29.593 --rc genhtml_legend=1 00:04:29.593 --rc geninfo_all_blocks=1 00:04:29.593 --rc geninfo_unexecuted_blocks=1 00:04:29.593 00:04:29.593 ' 00:04:29.593 12:51:01 -- 
spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:29.593 12:51:01 -- nvmf/common.sh@7 -- # uname -s 00:04:29.593 12:51:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:29.593 12:51:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:29.593 12:51:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:29.593 12:51:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:29.593 12:51:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:29.593 12:51:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:29.593 12:51:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:29.593 12:51:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:29.593 12:51:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:29.594 12:51:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:29.594 12:51:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:04:29.594 12:51:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:04:29.594 12:51:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:29.594 12:51:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:29.594 12:51:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:29.594 12:51:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:29.594 12:51:01 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:29.594 12:51:01 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:29.594 12:51:01 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:29.594 12:51:01 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:29.594 12:51:01 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:29.594 12:51:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.594 12:51:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.594 12:51:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.594 12:51:01 -- paths/export.sh@5 -- # export PATH 00:04:29.594 12:51:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.594 12:51:01 -- nvmf/common.sh@51 -- # : 0 00:04:29.594 12:51:01 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:29.594 12:51:01 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:29.594 12:51:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:29.594 12:51:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:29.594 12:51:01 -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:29.594 12:51:01 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:29.594 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:29.594 12:51:01 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:29.594 12:51:01 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:29.594 12:51:01 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:29.594 12:51:01 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:29.594 12:51:01 -- spdk/autotest.sh@32 -- # uname -s 00:04:29.594 12:51:01 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:29.594 12:51:01 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:29.594 12:51:01 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:29.594 12:51:01 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:29.594 12:51:01 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:29.594 12:51:01 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:29.852 12:51:01 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:29.852 12:51:01 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:29.852 12:51:01 -- spdk/autotest.sh@48 -- # udevadm_pid=54362 00:04:29.852 12:51:01 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:29.852 12:51:01 -- pm/common@17 -- # local monitor 00:04:29.852 12:51:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:29.852 12:51:01 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:29.852 12:51:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:29.852 12:51:01 -- pm/common@25 -- # sleep 1 00:04:29.852 12:51:01 -- pm/common@21 -- # date +%s 00:04:29.852 12:51:01 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732884661 00:04:29.852 12:51:01 -- pm/common@21 -- # date +%s 00:04:29.852 12:51:01 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732884661 00:04:29.852 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732884661_collect-vmstat.pm.log 00:04:29.852 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732884661_collect-cpu-load.pm.log 00:04:30.788 12:51:02 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:30.788 12:51:02 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:30.788 12:51:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:30.788 12:51:02 -- common/autotest_common.sh@10 -- # set +x 00:04:30.788 12:51:02 -- spdk/autotest.sh@59 -- # create_test_list 00:04:30.788 12:51:02 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:30.788 12:51:02 -- common/autotest_common.sh@10 -- # set +x 00:04:30.788 12:51:02 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:30.788 12:51:02 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:30.788 12:51:02 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:30.788 12:51:02 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:30.788 12:51:02 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:30.788 12:51:02 -- spdk/autotest.sh@65 -- # 
freebsd_update_contigmem_mod 00:04:30.788 12:51:02 -- common/autotest_common.sh@1457 -- # uname 00:04:30.788 12:51:02 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:30.788 12:51:02 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:30.788 12:51:02 -- common/autotest_common.sh@1477 -- # uname 00:04:30.788 12:51:02 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:30.788 12:51:02 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:30.788 12:51:02 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:30.788 lcov: LCOV version 1.15 00:04:30.788 12:51:02 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:45.689 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:45.689 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:03.783 12:51:32 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:03.783 12:51:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:03.783 12:51:32 -- common/autotest_common.sh@10 -- # set +x 00:05:03.783 12:51:32 -- spdk/autotest.sh@78 -- # rm -f 00:05:03.783 12:51:32 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:03.783 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:03.783 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:03.783 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:03.783 12:51:33 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:03.783 12:51:33 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:03.783 12:51:33 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:03.783 12:51:33 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:03.783 12:51:33 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:03.783 12:51:33 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:03.783 12:51:33 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:03.783 12:51:33 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:03.783 12:51:33 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:03.783 12:51:33 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:03.783 12:51:33 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:05:03.783 12:51:33 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:03.783 12:51:33 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:03.783 12:51:33 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:03.783 12:51:33 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:03.783 12:51:33 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:05:03.783 12:51:33 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:05:03.783 12:51:33 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:03.783 12:51:33 -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:03.783 12:51:33 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:03.783 12:51:33 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:05:03.783 12:51:33 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:05:03.783 12:51:33 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:03.783 12:51:33 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:03.783 12:51:33 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:03.783 12:51:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:03.783 12:51:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:03.783 12:51:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:03.783 12:51:33 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:03.783 12:51:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:03.783 No valid GPT data, bailing 00:05:03.783 12:51:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:03.783 12:51:33 -- scripts/common.sh@394 -- # pt= 00:05:03.783 12:51:33 -- scripts/common.sh@395 -- # return 1 00:05:03.783 12:51:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:03.783 1+0 records in 00:05:03.783 1+0 records out 00:05:03.783 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00488123 s, 215 MB/s 00:05:03.783 12:51:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:03.783 12:51:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:03.783 12:51:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:03.783 12:51:33 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:03.783 12:51:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:03.783 No valid GPT data, bailing 00:05:03.783 12:51:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:03.783 12:51:33 -- scripts/common.sh@394 -- # pt= 00:05:03.783 12:51:33 -- scripts/common.sh@395 -- # return 1 00:05:03.783 12:51:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:03.783 1+0 records in 00:05:03.783 1+0 records out 00:05:03.783 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00486255 s, 216 MB/s 00:05:03.783 12:51:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:03.783 12:51:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:03.783 12:51:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:03.783 12:51:33 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:03.783 12:51:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:03.783 No valid GPT data, bailing 00:05:03.783 12:51:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:03.783 12:51:33 -- scripts/common.sh@394 -- # pt= 00:05:03.783 12:51:33 -- scripts/common.sh@395 -- # return 1 00:05:03.783 12:51:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:03.783 1+0 records in 00:05:03.783 1+0 records out 00:05:03.783 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00518833 s, 202 MB/s 00:05:03.783 12:51:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:03.783 12:51:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:03.783 12:51:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:03.783 12:51:33 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:03.783 12:51:33 -- 
scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:03.783 No valid GPT data, bailing 00:05:03.783 12:51:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:03.783 12:51:33 -- scripts/common.sh@394 -- # pt= 00:05:03.783 12:51:33 -- scripts/common.sh@395 -- # return 1 00:05:03.783 12:51:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:03.783 1+0 records in 00:05:03.783 1+0 records out 00:05:03.783 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00508028 s, 206 MB/s 00:05:03.783 12:51:33 -- spdk/autotest.sh@105 -- # sync 00:05:03.783 12:51:33 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:03.783 12:51:33 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:03.783 12:51:33 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:04.720 12:51:36 -- spdk/autotest.sh@111 -- # uname -s 00:05:04.720 12:51:36 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:04.720 12:51:36 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:04.720 12:51:36 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:05.289 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:05.289 Hugepages 00:05:05.289 node hugesize free / total 00:05:05.289 node0 1048576kB 0 / 0 00:05:05.289 node0 2048kB 0 / 0 00:05:05.289 00:05:05.289 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:05.549 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:05.549 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:05.549 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:05.549 12:51:37 -- spdk/autotest.sh@117 -- # uname -s 00:05:05.549 12:51:37 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:05.549 12:51:37 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:05.549 12:51:37 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:06.487 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:06.487 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:06.487 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:06.487 12:51:37 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:07.867 12:51:38 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:07.867 12:51:38 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:07.867 12:51:38 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:07.867 12:51:38 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:07.867 12:51:38 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:07.867 12:51:38 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:07.867 12:51:38 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:07.867 12:51:38 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:07.867 12:51:38 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:07.867 12:51:39 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:07.867 12:51:39 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:07.867 12:51:39 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:07.867 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, 
so not binding PCI dev 00:05:08.127 Waiting for block devices as requested 00:05:08.127 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:08.127 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:08.388 12:51:39 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:08.388 12:51:39 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:08.388 12:51:39 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:08.388 12:51:39 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:08.388 12:51:39 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:08.388 12:51:39 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:08.388 12:51:39 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:08.388 12:51:39 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:08.388 12:51:39 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:08.388 12:51:39 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:08.388 12:51:39 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:08.388 12:51:39 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:08.388 12:51:39 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:08.388 12:51:39 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:08.388 12:51:39 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:08.388 12:51:39 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:08.388 12:51:39 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:08.388 12:51:39 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:08.388 12:51:39 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:08.388 12:51:39 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:08.388 12:51:39 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:08.388 12:51:39 -- common/autotest_common.sh@1543 -- # continue 00:05:08.388 12:51:39 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:08.388 12:51:39 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:08.388 12:51:39 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:08.388 12:51:39 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:08.388 12:51:39 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:08.388 12:51:39 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:08.388 12:51:39 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:08.388 12:51:39 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:08.388 12:51:39 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:08.388 12:51:39 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:08.388 12:51:39 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:08.388 12:51:39 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:08.388 12:51:39 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:08.388 12:51:39 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:08.388 12:51:39 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:08.388 12:51:39 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 
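The trace above (and the matching readout for the second controller that continues just below) repeats one fixed pattern per NVMe controller: resolve the PCI address to its /dev node through sysfs, read OACS from `nvme id-ctrl` to see whether namespace management (bit 0x8) is supported, then read `unvmcap` to decide whether anything needs reverting. A minimal standalone sketch of that pattern, assuming nvme-cli is installed and the controller/address pairing shown in this log; the loop body is a simplification, not the actual autotest helper:

for bdf in 0000:00:10.0 0000:00:11.0; do
  link=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")   # e.g. .../0000:00:10.0/nvme/nvme1
  ctrl=/dev/$(basename "$link")
  oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)              # ' 0x12a' in this run
  unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)        # ' 0' in this run
  if (( (oacs & 0x8) != 0 )) && [[ $unvmcap -eq 0 ]]; then
    continue   # namespace management supported and no unallocated capacity: nothing to revert
  fi
  # a real revert would recreate the default namespace here
done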
00:05:08.388 12:51:39 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:08.388 12:51:39 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:08.388 12:51:39 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:08.388 12:51:39 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:08.388 12:51:39 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:08.388 12:51:39 -- common/autotest_common.sh@1543 -- # continue 00:05:08.388 12:51:39 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:08.388 12:51:39 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:08.388 12:51:39 -- common/autotest_common.sh@10 -- # set +x 00:05:08.388 12:51:39 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:08.388 12:51:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:08.388 12:51:39 -- common/autotest_common.sh@10 -- # set +x 00:05:08.388 12:51:39 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:09.329 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:09.329 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:09.329 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:09.329 12:51:40 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:09.329 12:51:40 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:09.329 12:51:40 -- common/autotest_common.sh@10 -- # set +x 00:05:09.329 12:51:40 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:09.329 12:51:40 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:09.329 12:51:40 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:09.329 12:51:40 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:09.329 12:51:40 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:09.329 12:51:40 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:09.329 12:51:40 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:09.329 12:51:40 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:09.329 12:51:40 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:09.329 12:51:40 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:09.329 12:51:40 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:09.329 12:51:40 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:09.329 12:51:40 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:09.329 12:51:40 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:09.329 12:51:40 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:09.329 12:51:40 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:09.329 12:51:40 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:09.329 12:51:40 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:09.329 12:51:40 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:09.329 12:51:40 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:09.329 12:51:40 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:09.329 12:51:40 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:09.329 12:51:40 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:09.329 12:51:40 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:09.329 12:51:40 -- common/autotest_common.sh@1572 
-- # return 0 00:05:09.329 12:51:40 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:09.329 12:51:40 -- common/autotest_common.sh@1580 -- # return 0 00:05:09.329 12:51:40 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:09.588 12:51:40 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:09.588 12:51:40 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:09.588 12:51:40 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:09.588 12:51:40 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:09.588 12:51:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:09.588 12:51:40 -- common/autotest_common.sh@10 -- # set +x 00:05:09.588 12:51:40 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:05:09.588 12:51:40 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:05:09.588 12:51:40 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:05:09.588 12:51:40 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:09.588 12:51:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.588 12:51:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.588 12:51:40 -- common/autotest_common.sh@10 -- # set +x 00:05:09.588 ************************************ 00:05:09.588 START TEST env 00:05:09.588 ************************************ 00:05:09.588 12:51:40 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:09.588 * Looking for test storage... 00:05:09.588 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:09.588 12:51:40 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:09.588 12:51:40 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:09.588 12:51:40 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:09.588 12:51:41 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:09.589 12:51:41 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.589 12:51:41 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.589 12:51:41 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.589 12:51:41 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.589 12:51:41 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.589 12:51:41 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.589 12:51:41 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.589 12:51:41 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.589 12:51:41 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.589 12:51:41 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.589 12:51:41 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.589 12:51:41 env -- scripts/common.sh@344 -- # case "$op" in 00:05:09.589 12:51:41 env -- scripts/common.sh@345 -- # : 1 00:05:09.589 12:51:41 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.589 12:51:41 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:09.589 12:51:41 env -- scripts/common.sh@365 -- # decimal 1 00:05:09.589 12:51:41 env -- scripts/common.sh@353 -- # local d=1 00:05:09.589 12:51:41 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.589 12:51:41 env -- scripts/common.sh@355 -- # echo 1 00:05:09.589 12:51:41 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.589 12:51:41 env -- scripts/common.sh@366 -- # decimal 2 00:05:09.589 12:51:41 env -- scripts/common.sh@353 -- # local d=2 00:05:09.589 12:51:41 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.589 12:51:41 env -- scripts/common.sh@355 -- # echo 2 00:05:09.589 12:51:41 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.589 12:51:41 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.589 12:51:41 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.589 12:51:41 env -- scripts/common.sh@368 -- # return 0 00:05:09.589 12:51:41 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.589 12:51:41 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:09.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.589 --rc genhtml_branch_coverage=1 00:05:09.589 --rc genhtml_function_coverage=1 00:05:09.589 --rc genhtml_legend=1 00:05:09.589 --rc geninfo_all_blocks=1 00:05:09.589 --rc geninfo_unexecuted_blocks=1 00:05:09.589 00:05:09.589 ' 00:05:09.589 12:51:41 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:09.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.589 --rc genhtml_branch_coverage=1 00:05:09.589 --rc genhtml_function_coverage=1 00:05:09.589 --rc genhtml_legend=1 00:05:09.589 --rc geninfo_all_blocks=1 00:05:09.589 --rc geninfo_unexecuted_blocks=1 00:05:09.589 00:05:09.589 ' 00:05:09.589 12:51:41 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:09.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.589 --rc genhtml_branch_coverage=1 00:05:09.589 --rc genhtml_function_coverage=1 00:05:09.589 --rc genhtml_legend=1 00:05:09.589 --rc geninfo_all_blocks=1 00:05:09.589 --rc geninfo_unexecuted_blocks=1 00:05:09.589 00:05:09.589 ' 00:05:09.589 12:51:41 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:09.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.589 --rc genhtml_branch_coverage=1 00:05:09.589 --rc genhtml_function_coverage=1 00:05:09.589 --rc genhtml_legend=1 00:05:09.589 --rc geninfo_all_blocks=1 00:05:09.589 --rc geninfo_unexecuted_blocks=1 00:05:09.589 00:05:09.589 ' 00:05:09.589 12:51:41 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:09.589 12:51:41 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.589 12:51:41 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.589 12:51:41 env -- common/autotest_common.sh@10 -- # set +x 00:05:09.589 ************************************ 00:05:09.589 START TEST env_memory 00:05:09.589 ************************************ 00:05:09.589 12:51:41 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:09.589 00:05:09.589 00:05:09.589 CUnit - A unit testing framework for C - Version 2.1-3 00:05:09.589 http://cunit.sourceforge.net/ 00:05:09.589 00:05:09.589 00:05:09.589 Suite: memory 00:05:09.847 Test: alloc and free memory map ...[2024-11-29 12:51:41.128925] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:09.847 passed 00:05:09.847 Test: mem map translation ...[2024-11-29 12:51:41.160306] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:09.847 [2024-11-29 12:51:41.160376] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:09.847 [2024-11-29 12:51:41.160435] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:09.847 [2024-11-29 12:51:41.160447] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:09.847 passed 00:05:09.847 Test: mem map registration ...[2024-11-29 12:51:41.225213] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:09.847 [2024-11-29 12:51:41.225278] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:09.847 passed 00:05:09.847 Test: mem map adjacent registrations ...passed 00:05:09.847 00:05:09.847 Run Summary: Type Total Ran Passed Failed Inactive 00:05:09.847 suites 1 1 n/a 0 0 00:05:09.847 tests 4 4 4 0 0 00:05:09.847 asserts 152 152 152 0 n/a 00:05:09.847 00:05:09.847 Elapsed time = 0.216 seconds 00:05:09.847 00:05:09.847 real 0m0.236s 00:05:09.847 user 0m0.218s 00:05:09.847 sys 0m0.013s 00:05:09.847 12:51:41 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.847 12:51:41 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:09.848 ************************************ 00:05:09.848 END TEST env_memory 00:05:09.848 ************************************ 00:05:10.106 12:51:41 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:10.106 12:51:41 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.106 12:51:41 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.106 12:51:41 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.106 ************************************ 00:05:10.106 START TEST env_vtophys 00:05:10.106 ************************************ 00:05:10.106 12:51:41 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:10.106 EAL: lib.eal log level changed from notice to debug 00:05:10.106 EAL: Detected lcore 0 as core 0 on socket 0 00:05:10.106 EAL: Detected lcore 1 as core 0 on socket 0 00:05:10.106 EAL: Detected lcore 2 as core 0 on socket 0 00:05:10.106 EAL: Detected lcore 3 as core 0 on socket 0 00:05:10.106 EAL: Detected lcore 4 as core 0 on socket 0 00:05:10.106 EAL: Detected lcore 5 as core 0 on socket 0 00:05:10.106 EAL: Detected lcore 6 as core 0 on socket 0 00:05:10.106 EAL: Detected lcore 7 as core 0 on socket 0 00:05:10.106 EAL: Detected lcore 8 as core 0 on socket 0 00:05:10.106 EAL: Detected lcore 9 as core 0 on socket 0 00:05:10.106 EAL: Maximum logical cores by configuration: 128 00:05:10.106 EAL: Detected CPU lcores: 10 00:05:10.106 EAL: Detected NUMA nodes: 1 00:05:10.106 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:10.106 EAL: Detected shared linkage of DPDK 00:05:10.106 EAL: No 
shared files mode enabled, IPC will be disabled 00:05:10.106 EAL: Selected IOVA mode 'PA' 00:05:10.106 EAL: Probing VFIO support... 00:05:10.106 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:10.106 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:10.106 EAL: Ask a virtual area of 0x2e000 bytes 00:05:10.106 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:10.106 EAL: Setting up physically contiguous memory... 00:05:10.106 EAL: Setting maximum number of open files to 524288 00:05:10.106 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:10.106 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:10.106 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.106 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:10.106 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:10.106 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.106 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:10.106 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:10.106 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.106 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:10.106 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:10.106 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.106 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:10.106 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:10.106 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.106 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:10.106 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:10.106 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.106 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:10.106 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:10.106 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.106 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:10.106 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:10.106 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.106 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:10.106 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:10.106 EAL: Hugepages will be freed exactly as allocated. 00:05:10.106 EAL: No shared files mode enabled, IPC is disabled 00:05:10.106 EAL: No shared files mode enabled, IPC is disabled 00:05:10.106 EAL: TSC frequency is ~2200000 KHz 00:05:10.106 EAL: Main lcore 0 is ready (tid=7fbe935eaa00;cpuset=[0]) 00:05:10.106 EAL: Trying to obtain current memory policy. 00:05:10.106 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.106 EAL: Restoring previous memory policy: 0 00:05:10.106 EAL: request: mp_malloc_sync 00:05:10.106 EAL: No shared files mode enabled, IPC is disabled 00:05:10.106 EAL: Heap on socket 0 was expanded by 2MB 00:05:10.106 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:10.106 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:10.106 EAL: Mem event callback 'spdk:(nil)' registered 00:05:10.106 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:05:10.106 00:05:10.106 00:05:10.106 CUnit - A unit testing framework for C - Version 2.1-3 00:05:10.106 http://cunit.sourceforge.net/ 00:05:10.106 00:05:10.106 00:05:10.106 Suite: components_suite 00:05:10.106 Test: vtophys_malloc_test ...passed 00:05:10.106 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:10.106 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.106 EAL: Restoring previous memory policy: 4 00:05:10.106 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.106 EAL: request: mp_malloc_sync 00:05:10.106 EAL: No shared files mode enabled, IPC is disabled 00:05:10.106 EAL: Heap on socket 0 was expanded by 4MB 00:05:10.106 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.106 EAL: request: mp_malloc_sync 00:05:10.106 EAL: No shared files mode enabled, IPC is disabled 00:05:10.106 EAL: Heap on socket 0 was shrunk by 4MB 00:05:10.106 EAL: Trying to obtain current memory policy. 00:05:10.106 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.106 EAL: Restoring previous memory policy: 4 00:05:10.106 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.106 EAL: request: mp_malloc_sync 00:05:10.106 EAL: No shared files mode enabled, IPC is disabled 00:05:10.106 EAL: Heap on socket 0 was expanded by 6MB 00:05:10.106 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.106 EAL: request: mp_malloc_sync 00:05:10.106 EAL: No shared files mode enabled, IPC is disabled 00:05:10.106 EAL: Heap on socket 0 was shrunk by 6MB 00:05:10.106 EAL: Trying to obtain current memory policy. 00:05:10.106 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.107 EAL: Restoring previous memory policy: 4 00:05:10.107 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.107 EAL: request: mp_malloc_sync 00:05:10.107 EAL: No shared files mode enabled, IPC is disabled 00:05:10.107 EAL: Heap on socket 0 was expanded by 10MB 00:05:10.107 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.107 EAL: request: mp_malloc_sync 00:05:10.107 EAL: No shared files mode enabled, IPC is disabled 00:05:10.107 EAL: Heap on socket 0 was shrunk by 10MB 00:05:10.107 EAL: Trying to obtain current memory policy. 00:05:10.107 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.107 EAL: Restoring previous memory policy: 4 00:05:10.107 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.107 EAL: request: mp_malloc_sync 00:05:10.107 EAL: No shared files mode enabled, IPC is disabled 00:05:10.107 EAL: Heap on socket 0 was expanded by 18MB 00:05:10.107 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.107 EAL: request: mp_malloc_sync 00:05:10.107 EAL: No shared files mode enabled, IPC is disabled 00:05:10.107 EAL: Heap on socket 0 was shrunk by 18MB 00:05:10.107 EAL: Trying to obtain current memory policy. 00:05:10.107 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.107 EAL: Restoring previous memory policy: 4 00:05:10.107 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.107 EAL: request: mp_malloc_sync 00:05:10.107 EAL: No shared files mode enabled, IPC is disabled 00:05:10.107 EAL: Heap on socket 0 was expanded by 34MB 00:05:10.107 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.107 EAL: request: mp_malloc_sync 00:05:10.107 EAL: No shared files mode enabled, IPC is disabled 00:05:10.107 EAL: Heap on socket 0 was shrunk by 34MB 00:05:10.107 EAL: Trying to obtain current memory policy. 
00:05:10.107 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.107 EAL: Restoring previous memory policy: 4 00:05:10.107 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.107 EAL: request: mp_malloc_sync 00:05:10.107 EAL: No shared files mode enabled, IPC is disabled 00:05:10.107 EAL: Heap on socket 0 was expanded by 66MB 00:05:10.107 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.365 EAL: request: mp_malloc_sync 00:05:10.365 EAL: No shared files mode enabled, IPC is disabled 00:05:10.365 EAL: Heap on socket 0 was shrunk by 66MB 00:05:10.365 EAL: Trying to obtain current memory policy. 00:05:10.365 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.365 EAL: Restoring previous memory policy: 4 00:05:10.365 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.365 EAL: request: mp_malloc_sync 00:05:10.365 EAL: No shared files mode enabled, IPC is disabled 00:05:10.365 EAL: Heap on socket 0 was expanded by 130MB 00:05:10.365 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.365 EAL: request: mp_malloc_sync 00:05:10.365 EAL: No shared files mode enabled, IPC is disabled 00:05:10.365 EAL: Heap on socket 0 was shrunk by 130MB 00:05:10.365 EAL: Trying to obtain current memory policy. 00:05:10.365 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.365 EAL: Restoring previous memory policy: 4 00:05:10.365 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.365 EAL: request: mp_malloc_sync 00:05:10.365 EAL: No shared files mode enabled, IPC is disabled 00:05:10.365 EAL: Heap on socket 0 was expanded by 258MB 00:05:10.624 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.624 EAL: request: mp_malloc_sync 00:05:10.624 EAL: No shared files mode enabled, IPC is disabled 00:05:10.624 EAL: Heap on socket 0 was shrunk by 258MB 00:05:10.624 EAL: Trying to obtain current memory policy. 00:05:10.624 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.883 EAL: Restoring previous memory policy: 4 00:05:10.883 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.883 EAL: request: mp_malloc_sync 00:05:10.883 EAL: No shared files mode enabled, IPC is disabled 00:05:10.883 EAL: Heap on socket 0 was expanded by 514MB 00:05:10.883 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.142 EAL: request: mp_malloc_sync 00:05:11.142 EAL: No shared files mode enabled, IPC is disabled 00:05:11.142 EAL: Heap on socket 0 was shrunk by 514MB 00:05:11.142 EAL: Trying to obtain current memory policy. 
00:05:11.142 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.401 EAL: Restoring previous memory policy: 4 00:05:11.401 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.401 EAL: request: mp_malloc_sync 00:05:11.401 EAL: No shared files mode enabled, IPC is disabled 00:05:11.401 EAL: Heap on socket 0 was expanded by 1026MB 00:05:11.660 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.919 passed 00:05:11.919 00:05:11.919 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.919 suites 1 1 n/a 0 0 00:05:11.919 tests 2 2 2 0 0 00:05:11.919 asserts 5463 5463 5463 0 n/a 00:05:11.919 00:05:11.919 Elapsed time = 1.789 seconds 00:05:11.919 EAL: request: mp_malloc_sync 00:05:11.919 EAL: No shared files mode enabled, IPC is disabled 00:05:11.919 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:11.919 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.919 EAL: request: mp_malloc_sync 00:05:11.919 EAL: No shared files mode enabled, IPC is disabled 00:05:11.919 EAL: Heap on socket 0 was shrunk by 2MB 00:05:11.919 EAL: No shared files mode enabled, IPC is disabled 00:05:11.919 EAL: No shared files mode enabled, IPC is disabled 00:05:11.919 EAL: No shared files mode enabled, IPC is disabled 00:05:11.919 00:05:11.919 real 0m2.010s 00:05:11.919 user 0m1.154s 00:05:11.919 sys 0m0.723s 00:05:11.919 12:51:43 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.919 ************************************ 00:05:11.919 END TEST env_vtophys 00:05:11.919 ************************************ 00:05:11.919 12:51:43 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:12.178 12:51:43 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:12.178 12:51:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.178 12:51:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.178 12:51:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.178 ************************************ 00:05:12.178 START TEST env_pci 00:05:12.178 ************************************ 00:05:12.178 12:51:43 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:12.178 00:05:12.178 00:05:12.178 CUnit - A unit testing framework for C - Version 2.1-3 00:05:12.178 http://cunit.sourceforge.net/ 00:05:12.178 00:05:12.178 00:05:12.178 Suite: pci 00:05:12.178 Test: pci_hook ...[2024-11-29 12:51:43.461604] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56591 has claimed it 00:05:12.178 passed 00:05:12.178 00:05:12.178 Run Summary: Type Total Ran Passed Failed Inactive 00:05:12.178 suites 1 1 n/a 0 0 00:05:12.179 tests 1 1 1 0 0 00:05:12.179 asserts 25 25 25 0 n/a 00:05:12.179 00:05:12.179 Elapsed time = 0.003 seconds 00:05:12.179 EAL: Cannot find device (10000:00:01.0) 00:05:12.179 EAL: Failed to attach device on primary process 00:05:12.179 00:05:12.179 real 0m0.024s 00:05:12.179 user 0m0.009s 00:05:12.179 sys 0m0.014s 00:05:12.179 12:51:43 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.179 12:51:43 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:12.179 ************************************ 00:05:12.179 END TEST env_pci 00:05:12.179 ************************************ 00:05:12.179 12:51:43 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:12.179 12:51:43 env -- env/env.sh@15 -- # uname 00:05:12.179 12:51:43 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:12.179 12:51:43 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:12.179 12:51:43 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:12.179 12:51:43 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:12.179 12:51:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.179 12:51:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.179 ************************************ 00:05:12.179 START TEST env_dpdk_post_init 00:05:12.179 ************************************ 00:05:12.179 12:51:43 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:12.179 EAL: Detected CPU lcores: 10 00:05:12.179 EAL: Detected NUMA nodes: 1 00:05:12.179 EAL: Detected shared linkage of DPDK 00:05:12.179 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:12.179 EAL: Selected IOVA mode 'PA' 00:05:12.179 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:12.469 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:12.469 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:12.469 Starting DPDK initialization... 00:05:12.469 Starting SPDK post initialization... 00:05:12.469 SPDK NVMe probe 00:05:12.469 Attaching to 0000:00:10.0 00:05:12.469 Attaching to 0000:00:11.0 00:05:12.469 Attached to 0000:00:10.0 00:05:12.469 Attached to 0000:00:11.0 00:05:12.469 Cleaning up... 00:05:12.469 00:05:12.469 real 0m0.199s 00:05:12.469 user 0m0.060s 00:05:12.469 sys 0m0.037s 00:05:12.469 ************************************ 00:05:12.469 END TEST env_dpdk_post_init 00:05:12.469 ************************************ 00:05:12.469 12:51:43 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.469 12:51:43 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:12.469 12:51:43 env -- env/env.sh@26 -- # uname 00:05:12.469 12:51:43 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:12.469 12:51:43 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:12.469 12:51:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.469 12:51:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.469 12:51:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.469 ************************************ 00:05:12.469 START TEST env_mem_callbacks 00:05:12.469 ************************************ 00:05:12.469 12:51:43 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:12.469 EAL: Detected CPU lcores: 10 00:05:12.469 EAL: Detected NUMA nodes: 1 00:05:12.469 EAL: Detected shared linkage of DPDK 00:05:12.469 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:12.469 EAL: Selected IOVA mode 'PA' 00:05:12.469 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:12.469 00:05:12.469 00:05:12.469 CUnit - A unit testing framework for C - Version 2.1-3 00:05:12.469 http://cunit.sourceforge.net/ 00:05:12.469 00:05:12.469 00:05:12.469 Suite: memory 00:05:12.469 Test: test ... 
00:05:12.469 register 0x200000200000 2097152 00:05:12.469 malloc 3145728 00:05:12.469 register 0x200000400000 4194304 00:05:12.469 buf 0x200000500000 len 3145728 PASSED 00:05:12.469 malloc 64 00:05:12.469 buf 0x2000004fff40 len 64 PASSED 00:05:12.469 malloc 4194304 00:05:12.469 register 0x200000800000 6291456 00:05:12.469 buf 0x200000a00000 len 4194304 PASSED 00:05:12.469 free 0x200000500000 3145728 00:05:12.469 free 0x2000004fff40 64 00:05:12.469 unregister 0x200000400000 4194304 PASSED 00:05:12.469 free 0x200000a00000 4194304 00:05:12.469 unregister 0x200000800000 6291456 PASSED 00:05:12.469 malloc 8388608 00:05:12.469 register 0x200000400000 10485760 00:05:12.469 buf 0x200000600000 len 8388608 PASSED 00:05:12.469 free 0x200000600000 8388608 00:05:12.469 unregister 0x200000400000 10485760 PASSED 00:05:12.469 passed 00:05:12.469 00:05:12.469 Run Summary: Type Total Ran Passed Failed Inactive 00:05:12.469 suites 1 1 n/a 0 0 00:05:12.469 tests 1 1 1 0 0 00:05:12.469 asserts 15 15 15 0 n/a 00:05:12.469 00:05:12.469 Elapsed time = 0.010 seconds 00:05:12.469 00:05:12.469 real 0m0.151s 00:05:12.469 user 0m0.022s 00:05:12.469 sys 0m0.025s 00:05:12.469 12:51:43 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.469 ************************************ 00:05:12.469 END TEST env_mem_callbacks 00:05:12.469 ************************************ 00:05:12.469 12:51:43 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:12.751 ************************************ 00:05:12.751 END TEST env 00:05:12.751 ************************************ 00:05:12.751 00:05:12.751 real 0m3.141s 00:05:12.751 user 0m1.679s 00:05:12.751 sys 0m1.098s 00:05:12.751 12:51:44 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.751 12:51:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.751 12:51:44 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:12.751 12:51:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.751 12:51:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.751 12:51:44 -- common/autotest_common.sh@10 -- # set +x 00:05:12.751 ************************************ 00:05:12.751 START TEST rpc 00:05:12.751 ************************************ 00:05:12.751 12:51:44 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:12.751 * Looking for test storage... 
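Each suite in this log (env_memory, env_vtophys, env_pci, env_dpdk_post_init, env_mem_callbacks, and the rpc suite whose test-storage scan starts here) is driven by the same run_test wrapper, which produces the START/END banners and the real/user/sys timing seen above. A rough, hypothetical sketch of that wrapper's shape; the real helper in autotest_common.sh also records results and manages xtrace:

run_test() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"          # the test script under test, with its arguments
  local rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}

run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut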
00:05:12.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:12.751 12:51:44 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:12.751 12:51:44 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:12.751 12:51:44 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:12.751 12:51:44 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:12.751 12:51:44 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.751 12:51:44 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.751 12:51:44 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.751 12:51:44 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.751 12:51:44 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.751 12:51:44 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.751 12:51:44 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.751 12:51:44 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.751 12:51:44 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.751 12:51:44 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.751 12:51:44 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.751 12:51:44 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:12.751 12:51:44 rpc -- scripts/common.sh@345 -- # : 1 00:05:12.751 12:51:44 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.751 12:51:44 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:12.751 12:51:44 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:12.751 12:51:44 rpc -- scripts/common.sh@353 -- # local d=1 00:05:12.751 12:51:44 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.751 12:51:44 rpc -- scripts/common.sh@355 -- # echo 1 00:05:13.011 12:51:44 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.011 12:51:44 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:13.011 12:51:44 rpc -- scripts/common.sh@353 -- # local d=2 00:05:13.011 12:51:44 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.011 12:51:44 rpc -- scripts/common.sh@355 -- # echo 2 00:05:13.011 12:51:44 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.011 12:51:44 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.011 12:51:44 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.011 12:51:44 rpc -- scripts/common.sh@368 -- # return 0 00:05:13.011 12:51:44 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.011 12:51:44 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:13.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.011 --rc genhtml_branch_coverage=1 00:05:13.011 --rc genhtml_function_coverage=1 00:05:13.011 --rc genhtml_legend=1 00:05:13.011 --rc geninfo_all_blocks=1 00:05:13.011 --rc geninfo_unexecuted_blocks=1 00:05:13.011 00:05:13.011 ' 00:05:13.011 12:51:44 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:13.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.011 --rc genhtml_branch_coverage=1 00:05:13.011 --rc genhtml_function_coverage=1 00:05:13.011 --rc genhtml_legend=1 00:05:13.011 --rc geninfo_all_blocks=1 00:05:13.011 --rc geninfo_unexecuted_blocks=1 00:05:13.011 00:05:13.011 ' 00:05:13.011 12:51:44 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:13.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.011 --rc genhtml_branch_coverage=1 00:05:13.011 --rc genhtml_function_coverage=1 00:05:13.011 --rc 
genhtml_legend=1 00:05:13.011 --rc geninfo_all_blocks=1 00:05:13.011 --rc geninfo_unexecuted_blocks=1 00:05:13.011 00:05:13.011 ' 00:05:13.011 12:51:44 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:13.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.011 --rc genhtml_branch_coverage=1 00:05:13.011 --rc genhtml_function_coverage=1 00:05:13.011 --rc genhtml_legend=1 00:05:13.011 --rc geninfo_all_blocks=1 00:05:13.011 --rc geninfo_unexecuted_blocks=1 00:05:13.011 00:05:13.011 ' 00:05:13.011 12:51:44 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56714 00:05:13.011 12:51:44 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.011 12:51:44 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:13.011 12:51:44 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56714 00:05:13.011 12:51:44 rpc -- common/autotest_common.sh@835 -- # '[' -z 56714 ']' 00:05:13.011 12:51:44 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.011 12:51:44 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.011 12:51:44 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.011 12:51:44 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.011 12:51:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.011 [2024-11-29 12:51:44.351548] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:05:13.011 [2024-11-29 12:51:44.352285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56714 ] 00:05:13.011 [2024-11-29 12:51:44.506549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.271 [2024-11-29 12:51:44.585668] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:13.271 [2024-11-29 12:51:44.586081] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56714' to capture a snapshot of events at runtime. 00:05:13.271 [2024-11-29 12:51:44.586223] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:13.271 [2024-11-29 12:51:44.586240] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:13.271 [2024-11-29 12:51:44.586249] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56714 for offline analysis/debug. 
00:05:13.271 [2024-11-29 12:51:44.586770] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.271 [2024-11-29 12:51:44.677642] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:13.530 12:51:44 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.530 12:51:44 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:13.530 12:51:44 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:13.530 12:51:44 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:13.530 12:51:44 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:13.530 12:51:44 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:13.530 12:51:44 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.530 12:51:44 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.530 12:51:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.530 ************************************ 00:05:13.530 START TEST rpc_integrity 00:05:13.530 ************************************ 00:05:13.530 12:51:44 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:13.530 12:51:44 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:13.530 12:51:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.530 12:51:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.530 12:51:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.530 12:51:44 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:13.530 12:51:44 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:13.530 12:51:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:13.530 12:51:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:13.530 12:51:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.530 12:51:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.530 12:51:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.530 12:51:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:13.530 12:51:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:13.530 12:51:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.530 12:51:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.789 12:51:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.789 12:51:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:13.789 { 00:05:13.789 "name": "Malloc0", 00:05:13.789 "aliases": [ 00:05:13.789 "012157fa-1e10-4adc-b82b-bf0f7eace40e" 00:05:13.789 ], 00:05:13.789 "product_name": "Malloc disk", 00:05:13.789 "block_size": 512, 00:05:13.789 "num_blocks": 16384, 00:05:13.789 "uuid": "012157fa-1e10-4adc-b82b-bf0f7eace40e", 00:05:13.789 "assigned_rate_limits": { 00:05:13.789 "rw_ios_per_sec": 0, 00:05:13.789 "rw_mbytes_per_sec": 0, 00:05:13.789 "r_mbytes_per_sec": 0, 00:05:13.789 "w_mbytes_per_sec": 0 00:05:13.789 }, 00:05:13.789 "claimed": false, 00:05:13.789 "zoned": false, 00:05:13.789 
"supported_io_types": { 00:05:13.789 "read": true, 00:05:13.789 "write": true, 00:05:13.789 "unmap": true, 00:05:13.789 "flush": true, 00:05:13.789 "reset": true, 00:05:13.789 "nvme_admin": false, 00:05:13.789 "nvme_io": false, 00:05:13.789 "nvme_io_md": false, 00:05:13.789 "write_zeroes": true, 00:05:13.789 "zcopy": true, 00:05:13.789 "get_zone_info": false, 00:05:13.789 "zone_management": false, 00:05:13.789 "zone_append": false, 00:05:13.789 "compare": false, 00:05:13.789 "compare_and_write": false, 00:05:13.789 "abort": true, 00:05:13.789 "seek_hole": false, 00:05:13.789 "seek_data": false, 00:05:13.789 "copy": true, 00:05:13.789 "nvme_iov_md": false 00:05:13.789 }, 00:05:13.789 "memory_domains": [ 00:05:13.789 { 00:05:13.789 "dma_device_id": "system", 00:05:13.789 "dma_device_type": 1 00:05:13.789 }, 00:05:13.789 { 00:05:13.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.789 "dma_device_type": 2 00:05:13.789 } 00:05:13.789 ], 00:05:13.789 "driver_specific": {} 00:05:13.789 } 00:05:13.789 ]' 00:05:13.789 12:51:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:13.789 12:51:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:13.789 12:51:45 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:13.789 12:51:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.789 12:51:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.789 [2024-11-29 12:51:45.111172] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:13.789 [2024-11-29 12:51:45.111243] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:13.789 [2024-11-29 12:51:45.111309] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14f3050 00:05:13.789 [2024-11-29 12:51:45.111325] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:13.789 [2024-11-29 12:51:45.112947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:13.789 [2024-11-29 12:51:45.112987] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:13.789 Passthru0 00:05:13.789 12:51:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.789 12:51:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:13.789 12:51:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.789 12:51:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.789 12:51:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.789 12:51:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:13.789 { 00:05:13.789 "name": "Malloc0", 00:05:13.789 "aliases": [ 00:05:13.789 "012157fa-1e10-4adc-b82b-bf0f7eace40e" 00:05:13.789 ], 00:05:13.789 "product_name": "Malloc disk", 00:05:13.789 "block_size": 512, 00:05:13.789 "num_blocks": 16384, 00:05:13.789 "uuid": "012157fa-1e10-4adc-b82b-bf0f7eace40e", 00:05:13.789 "assigned_rate_limits": { 00:05:13.789 "rw_ios_per_sec": 0, 00:05:13.789 "rw_mbytes_per_sec": 0, 00:05:13.789 "r_mbytes_per_sec": 0, 00:05:13.789 "w_mbytes_per_sec": 0 00:05:13.789 }, 00:05:13.789 "claimed": true, 00:05:13.789 "claim_type": "exclusive_write", 00:05:13.789 "zoned": false, 00:05:13.789 "supported_io_types": { 00:05:13.789 "read": true, 00:05:13.789 "write": true, 00:05:13.789 "unmap": true, 00:05:13.789 "flush": true, 00:05:13.789 "reset": true, 00:05:13.789 "nvme_admin": false, 
00:05:13.789 "nvme_io": false, 00:05:13.789 "nvme_io_md": false, 00:05:13.789 "write_zeroes": true, 00:05:13.789 "zcopy": true, 00:05:13.789 "get_zone_info": false, 00:05:13.789 "zone_management": false, 00:05:13.789 "zone_append": false, 00:05:13.789 "compare": false, 00:05:13.789 "compare_and_write": false, 00:05:13.789 "abort": true, 00:05:13.789 "seek_hole": false, 00:05:13.789 "seek_data": false, 00:05:13.789 "copy": true, 00:05:13.789 "nvme_iov_md": false 00:05:13.789 }, 00:05:13.789 "memory_domains": [ 00:05:13.789 { 00:05:13.789 "dma_device_id": "system", 00:05:13.789 "dma_device_type": 1 00:05:13.789 }, 00:05:13.789 { 00:05:13.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.789 "dma_device_type": 2 00:05:13.789 } 00:05:13.789 ], 00:05:13.789 "driver_specific": {} 00:05:13.789 }, 00:05:13.789 { 00:05:13.789 "name": "Passthru0", 00:05:13.789 "aliases": [ 00:05:13.789 "3fe1904f-dc95-5bd2-96f0-f34ce5893d08" 00:05:13.789 ], 00:05:13.789 "product_name": "passthru", 00:05:13.789 "block_size": 512, 00:05:13.789 "num_blocks": 16384, 00:05:13.789 "uuid": "3fe1904f-dc95-5bd2-96f0-f34ce5893d08", 00:05:13.789 "assigned_rate_limits": { 00:05:13.789 "rw_ios_per_sec": 0, 00:05:13.789 "rw_mbytes_per_sec": 0, 00:05:13.789 "r_mbytes_per_sec": 0, 00:05:13.789 "w_mbytes_per_sec": 0 00:05:13.790 }, 00:05:13.790 "claimed": false, 00:05:13.790 "zoned": false, 00:05:13.790 "supported_io_types": { 00:05:13.790 "read": true, 00:05:13.790 "write": true, 00:05:13.790 "unmap": true, 00:05:13.790 "flush": true, 00:05:13.790 "reset": true, 00:05:13.790 "nvme_admin": false, 00:05:13.790 "nvme_io": false, 00:05:13.790 "nvme_io_md": false, 00:05:13.790 "write_zeroes": true, 00:05:13.790 "zcopy": true, 00:05:13.790 "get_zone_info": false, 00:05:13.790 "zone_management": false, 00:05:13.790 "zone_append": false, 00:05:13.790 "compare": false, 00:05:13.790 "compare_and_write": false, 00:05:13.790 "abort": true, 00:05:13.790 "seek_hole": false, 00:05:13.790 "seek_data": false, 00:05:13.790 "copy": true, 00:05:13.790 "nvme_iov_md": false 00:05:13.790 }, 00:05:13.790 "memory_domains": [ 00:05:13.790 { 00:05:13.790 "dma_device_id": "system", 00:05:13.790 "dma_device_type": 1 00:05:13.790 }, 00:05:13.790 { 00:05:13.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.790 "dma_device_type": 2 00:05:13.790 } 00:05:13.790 ], 00:05:13.790 "driver_specific": { 00:05:13.790 "passthru": { 00:05:13.790 "name": "Passthru0", 00:05:13.790 "base_bdev_name": "Malloc0" 00:05:13.790 } 00:05:13.790 } 00:05:13.790 } 00:05:13.790 ]' 00:05:13.790 12:51:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:13.790 12:51:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:13.790 12:51:45 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:13.790 12:51:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.790 12:51:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.790 12:51:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.790 12:51:45 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:13.790 12:51:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.790 12:51:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.790 12:51:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.790 12:51:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:13.790 12:51:45 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.790 12:51:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.790 12:51:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.790 12:51:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:13.790 12:51:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:13.790 ************************************ 00:05:13.790 END TEST rpc_integrity 00:05:13.790 ************************************ 00:05:13.790 12:51:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:13.790 00:05:13.790 real 0m0.342s 00:05:13.790 user 0m0.232s 00:05:13.790 sys 0m0.039s 00:05:13.790 12:51:45 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.790 12:51:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.048 12:51:45 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:14.048 12:51:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.048 12:51:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.048 12:51:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.048 ************************************ 00:05:14.048 START TEST rpc_plugins 00:05:14.048 ************************************ 00:05:14.049 12:51:45 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:14.049 12:51:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:14.049 12:51:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.049 12:51:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:14.049 12:51:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.049 12:51:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:14.049 12:51:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:14.049 12:51:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.049 12:51:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:14.049 12:51:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.049 12:51:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:14.049 { 00:05:14.049 "name": "Malloc1", 00:05:14.049 "aliases": [ 00:05:14.049 "31a5d54d-7581-4ff7-aaa1-26eab1365df9" 00:05:14.049 ], 00:05:14.049 "product_name": "Malloc disk", 00:05:14.049 "block_size": 4096, 00:05:14.049 "num_blocks": 256, 00:05:14.049 "uuid": "31a5d54d-7581-4ff7-aaa1-26eab1365df9", 00:05:14.049 "assigned_rate_limits": { 00:05:14.049 "rw_ios_per_sec": 0, 00:05:14.049 "rw_mbytes_per_sec": 0, 00:05:14.049 "r_mbytes_per_sec": 0, 00:05:14.049 "w_mbytes_per_sec": 0 00:05:14.049 }, 00:05:14.049 "claimed": false, 00:05:14.049 "zoned": false, 00:05:14.049 "supported_io_types": { 00:05:14.049 "read": true, 00:05:14.049 "write": true, 00:05:14.049 "unmap": true, 00:05:14.049 "flush": true, 00:05:14.049 "reset": true, 00:05:14.049 "nvme_admin": false, 00:05:14.049 "nvme_io": false, 00:05:14.049 "nvme_io_md": false, 00:05:14.049 "write_zeroes": true, 00:05:14.049 "zcopy": true, 00:05:14.049 "get_zone_info": false, 00:05:14.049 "zone_management": false, 00:05:14.049 "zone_append": false, 00:05:14.049 "compare": false, 00:05:14.049 "compare_and_write": false, 00:05:14.049 "abort": true, 00:05:14.049 "seek_hole": false, 00:05:14.049 "seek_data": false, 00:05:14.049 "copy": true, 00:05:14.049 "nvme_iov_md": false 00:05:14.049 }, 00:05:14.049 "memory_domains": [ 00:05:14.049 { 
00:05:14.049 "dma_device_id": "system", 00:05:14.049 "dma_device_type": 1 00:05:14.049 }, 00:05:14.049 { 00:05:14.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.049 "dma_device_type": 2 00:05:14.049 } 00:05:14.049 ], 00:05:14.049 "driver_specific": {} 00:05:14.049 } 00:05:14.049 ]' 00:05:14.049 12:51:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:14.049 12:51:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:14.049 12:51:45 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:14.049 12:51:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.049 12:51:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:14.049 12:51:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.049 12:51:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:14.049 12:51:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.049 12:51:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:14.049 12:51:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.049 12:51:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:14.049 12:51:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:14.049 ************************************ 00:05:14.049 END TEST rpc_plugins 00:05:14.049 ************************************ 00:05:14.049 12:51:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:14.049 00:05:14.049 real 0m0.166s 00:05:14.049 user 0m0.107s 00:05:14.049 sys 0m0.021s 00:05:14.049 12:51:45 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.049 12:51:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:14.309 12:51:45 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:14.309 12:51:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.309 12:51:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.309 12:51:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.309 ************************************ 00:05:14.309 START TEST rpc_trace_cmd_test 00:05:14.309 ************************************ 00:05:14.309 12:51:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:14.309 12:51:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:14.309 12:51:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:14.309 12:51:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.309 12:51:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:14.309 12:51:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.309 12:51:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:14.309 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56714", 00:05:14.309 "tpoint_group_mask": "0x8", 00:05:14.309 "iscsi_conn": { 00:05:14.309 "mask": "0x2", 00:05:14.309 "tpoint_mask": "0x0" 00:05:14.309 }, 00:05:14.309 "scsi": { 00:05:14.309 "mask": "0x4", 00:05:14.309 "tpoint_mask": "0x0" 00:05:14.309 }, 00:05:14.309 "bdev": { 00:05:14.309 "mask": "0x8", 00:05:14.309 "tpoint_mask": "0xffffffffffffffff" 00:05:14.309 }, 00:05:14.309 "nvmf_rdma": { 00:05:14.309 "mask": "0x10", 00:05:14.309 "tpoint_mask": "0x0" 00:05:14.309 }, 00:05:14.309 "nvmf_tcp": { 00:05:14.309 "mask": "0x20", 00:05:14.309 "tpoint_mask": "0x0" 00:05:14.309 }, 00:05:14.309 "ftl": { 00:05:14.309 
"mask": "0x40", 00:05:14.309 "tpoint_mask": "0x0" 00:05:14.309 }, 00:05:14.309 "blobfs": { 00:05:14.309 "mask": "0x80", 00:05:14.309 "tpoint_mask": "0x0" 00:05:14.309 }, 00:05:14.309 "dsa": { 00:05:14.309 "mask": "0x200", 00:05:14.309 "tpoint_mask": "0x0" 00:05:14.309 }, 00:05:14.309 "thread": { 00:05:14.309 "mask": "0x400", 00:05:14.309 "tpoint_mask": "0x0" 00:05:14.309 }, 00:05:14.309 "nvme_pcie": { 00:05:14.309 "mask": "0x800", 00:05:14.309 "tpoint_mask": "0x0" 00:05:14.309 }, 00:05:14.309 "iaa": { 00:05:14.309 "mask": "0x1000", 00:05:14.309 "tpoint_mask": "0x0" 00:05:14.309 }, 00:05:14.309 "nvme_tcp": { 00:05:14.309 "mask": "0x2000", 00:05:14.309 "tpoint_mask": "0x0" 00:05:14.309 }, 00:05:14.309 "bdev_nvme": { 00:05:14.309 "mask": "0x4000", 00:05:14.309 "tpoint_mask": "0x0" 00:05:14.309 }, 00:05:14.309 "sock": { 00:05:14.309 "mask": "0x8000", 00:05:14.309 "tpoint_mask": "0x0" 00:05:14.309 }, 00:05:14.309 "blob": { 00:05:14.309 "mask": "0x10000", 00:05:14.309 "tpoint_mask": "0x0" 00:05:14.309 }, 00:05:14.309 "bdev_raid": { 00:05:14.309 "mask": "0x20000", 00:05:14.309 "tpoint_mask": "0x0" 00:05:14.309 }, 00:05:14.309 "scheduler": { 00:05:14.309 "mask": "0x40000", 00:05:14.309 "tpoint_mask": "0x0" 00:05:14.309 } 00:05:14.309 }' 00:05:14.309 12:51:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:14.309 12:51:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:14.309 12:51:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:14.309 12:51:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:14.309 12:51:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:14.309 12:51:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:14.309 12:51:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:14.309 12:51:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:14.309 12:51:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:14.569 ************************************ 00:05:14.569 END TEST rpc_trace_cmd_test 00:05:14.569 ************************************ 00:05:14.569 12:51:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:14.569 00:05:14.569 real 0m0.290s 00:05:14.569 user 0m0.248s 00:05:14.569 sys 0m0.032s 00:05:14.569 12:51:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.569 12:51:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:14.569 12:51:45 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:14.569 12:51:45 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:14.569 12:51:45 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:14.569 12:51:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.569 12:51:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.569 12:51:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.569 ************************************ 00:05:14.569 START TEST rpc_daemon_integrity 00:05:14.569 ************************************ 00:05:14.569 12:51:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:14.569 12:51:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:14.569 12:51:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.569 12:51:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.569 
12:51:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.569 12:51:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:14.569 12:51:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:14.569 12:51:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:14.569 12:51:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:14.569 12:51:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.569 12:51:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.569 12:51:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.569 12:51:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:14.569 12:51:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:14.569 12:51:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.569 12:51:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.569 12:51:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.569 12:51:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:14.569 { 00:05:14.569 "name": "Malloc2", 00:05:14.569 "aliases": [ 00:05:14.569 "5bb568e0-efaf-4ab2-b370-5ec2c8610c07" 00:05:14.569 ], 00:05:14.569 "product_name": "Malloc disk", 00:05:14.569 "block_size": 512, 00:05:14.569 "num_blocks": 16384, 00:05:14.569 "uuid": "5bb568e0-efaf-4ab2-b370-5ec2c8610c07", 00:05:14.569 "assigned_rate_limits": { 00:05:14.569 "rw_ios_per_sec": 0, 00:05:14.569 "rw_mbytes_per_sec": 0, 00:05:14.569 "r_mbytes_per_sec": 0, 00:05:14.569 "w_mbytes_per_sec": 0 00:05:14.569 }, 00:05:14.569 "claimed": false, 00:05:14.569 "zoned": false, 00:05:14.569 "supported_io_types": { 00:05:14.569 "read": true, 00:05:14.569 "write": true, 00:05:14.569 "unmap": true, 00:05:14.569 "flush": true, 00:05:14.569 "reset": true, 00:05:14.569 "nvme_admin": false, 00:05:14.569 "nvme_io": false, 00:05:14.569 "nvme_io_md": false, 00:05:14.569 "write_zeroes": true, 00:05:14.569 "zcopy": true, 00:05:14.569 "get_zone_info": false, 00:05:14.569 "zone_management": false, 00:05:14.569 "zone_append": false, 00:05:14.569 "compare": false, 00:05:14.569 "compare_and_write": false, 00:05:14.569 "abort": true, 00:05:14.569 "seek_hole": false, 00:05:14.569 "seek_data": false, 00:05:14.569 "copy": true, 00:05:14.569 "nvme_iov_md": false 00:05:14.569 }, 00:05:14.569 "memory_domains": [ 00:05:14.569 { 00:05:14.569 "dma_device_id": "system", 00:05:14.569 "dma_device_type": 1 00:05:14.569 }, 00:05:14.569 { 00:05:14.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.569 "dma_device_type": 2 00:05:14.569 } 00:05:14.569 ], 00:05:14.569 "driver_specific": {} 00:05:14.569 } 00:05:14.569 ]' 00:05:14.569 12:51:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:14.829 12:51:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:14.829 12:51:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:14.829 12:51:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.829 12:51:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.829 [2024-11-29 12:51:46.096828] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:14.829 [2024-11-29 12:51:46.096927] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:05:14.829 [2024-11-29 12:51:46.096950] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14fe030 00:05:14.829 [2024-11-29 12:51:46.096961] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:14.829 [2024-11-29 12:51:46.098242] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:14.829 [2024-11-29 12:51:46.098338] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:14.829 Passthru0 00:05:14.829 12:51:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.829 12:51:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:14.829 12:51:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.829 12:51:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.829 12:51:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.829 12:51:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:14.829 { 00:05:14.829 "name": "Malloc2", 00:05:14.829 "aliases": [ 00:05:14.829 "5bb568e0-efaf-4ab2-b370-5ec2c8610c07" 00:05:14.829 ], 00:05:14.829 "product_name": "Malloc disk", 00:05:14.829 "block_size": 512, 00:05:14.829 "num_blocks": 16384, 00:05:14.829 "uuid": "5bb568e0-efaf-4ab2-b370-5ec2c8610c07", 00:05:14.829 "assigned_rate_limits": { 00:05:14.829 "rw_ios_per_sec": 0, 00:05:14.829 "rw_mbytes_per_sec": 0, 00:05:14.829 "r_mbytes_per_sec": 0, 00:05:14.829 "w_mbytes_per_sec": 0 00:05:14.829 }, 00:05:14.829 "claimed": true, 00:05:14.829 "claim_type": "exclusive_write", 00:05:14.829 "zoned": false, 00:05:14.829 "supported_io_types": { 00:05:14.829 "read": true, 00:05:14.829 "write": true, 00:05:14.829 "unmap": true, 00:05:14.829 "flush": true, 00:05:14.829 "reset": true, 00:05:14.829 "nvme_admin": false, 00:05:14.829 "nvme_io": false, 00:05:14.829 "nvme_io_md": false, 00:05:14.829 "write_zeroes": true, 00:05:14.829 "zcopy": true, 00:05:14.829 "get_zone_info": false, 00:05:14.829 "zone_management": false, 00:05:14.829 "zone_append": false, 00:05:14.829 "compare": false, 00:05:14.829 "compare_and_write": false, 00:05:14.829 "abort": true, 00:05:14.829 "seek_hole": false, 00:05:14.829 "seek_data": false, 00:05:14.829 "copy": true, 00:05:14.829 "nvme_iov_md": false 00:05:14.829 }, 00:05:14.829 "memory_domains": [ 00:05:14.829 { 00:05:14.829 "dma_device_id": "system", 00:05:14.829 "dma_device_type": 1 00:05:14.829 }, 00:05:14.829 { 00:05:14.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.829 "dma_device_type": 2 00:05:14.829 } 00:05:14.829 ], 00:05:14.829 "driver_specific": {} 00:05:14.829 }, 00:05:14.829 { 00:05:14.829 "name": "Passthru0", 00:05:14.829 "aliases": [ 00:05:14.829 "7e89d3a5-5d34-5d89-9c73-9a08831faf0d" 00:05:14.829 ], 00:05:14.829 "product_name": "passthru", 00:05:14.829 "block_size": 512, 00:05:14.829 "num_blocks": 16384, 00:05:14.829 "uuid": "7e89d3a5-5d34-5d89-9c73-9a08831faf0d", 00:05:14.829 "assigned_rate_limits": { 00:05:14.829 "rw_ios_per_sec": 0, 00:05:14.829 "rw_mbytes_per_sec": 0, 00:05:14.829 "r_mbytes_per_sec": 0, 00:05:14.829 "w_mbytes_per_sec": 0 00:05:14.829 }, 00:05:14.829 "claimed": false, 00:05:14.829 "zoned": false, 00:05:14.829 "supported_io_types": { 00:05:14.829 "read": true, 00:05:14.829 "write": true, 00:05:14.829 "unmap": true, 00:05:14.829 "flush": true, 00:05:14.829 "reset": true, 00:05:14.829 "nvme_admin": false, 00:05:14.829 "nvme_io": false, 00:05:14.829 
"nvme_io_md": false, 00:05:14.829 "write_zeroes": true, 00:05:14.829 "zcopy": true, 00:05:14.829 "get_zone_info": false, 00:05:14.829 "zone_management": false, 00:05:14.829 "zone_append": false, 00:05:14.829 "compare": false, 00:05:14.829 "compare_and_write": false, 00:05:14.829 "abort": true, 00:05:14.829 "seek_hole": false, 00:05:14.829 "seek_data": false, 00:05:14.829 "copy": true, 00:05:14.829 "nvme_iov_md": false 00:05:14.829 }, 00:05:14.829 "memory_domains": [ 00:05:14.829 { 00:05:14.829 "dma_device_id": "system", 00:05:14.829 "dma_device_type": 1 00:05:14.829 }, 00:05:14.829 { 00:05:14.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.829 "dma_device_type": 2 00:05:14.829 } 00:05:14.829 ], 00:05:14.829 "driver_specific": { 00:05:14.829 "passthru": { 00:05:14.829 "name": "Passthru0", 00:05:14.829 "base_bdev_name": "Malloc2" 00:05:14.829 } 00:05:14.829 } 00:05:14.829 } 00:05:14.829 ]' 00:05:14.829 12:51:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:14.829 12:51:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:14.829 12:51:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:14.829 12:51:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.829 12:51:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.829 12:51:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.829 12:51:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:14.829 12:51:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.829 12:51:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.829 12:51:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.829 12:51:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:14.829 12:51:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.829 12:51:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.829 12:51:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.829 12:51:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:14.829 12:51:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:14.829 ************************************ 00:05:14.829 END TEST rpc_daemon_integrity 00:05:14.829 ************************************ 00:05:14.829 12:51:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:14.829 00:05:14.829 real 0m0.344s 00:05:14.829 user 0m0.230s 00:05:14.829 sys 0m0.044s 00:05:14.829 12:51:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.829 12:51:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.829 12:51:46 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:14.829 12:51:46 rpc -- rpc/rpc.sh@84 -- # killprocess 56714 00:05:14.829 12:51:46 rpc -- common/autotest_common.sh@954 -- # '[' -z 56714 ']' 00:05:14.829 12:51:46 rpc -- common/autotest_common.sh@958 -- # kill -0 56714 00:05:14.829 12:51:46 rpc -- common/autotest_common.sh@959 -- # uname 00:05:14.829 12:51:46 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.830 12:51:46 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56714 00:05:15.089 killing process with pid 56714 00:05:15.089 12:51:46 rpc -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.089 12:51:46 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.089 12:51:46 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56714' 00:05:15.089 12:51:46 rpc -- common/autotest_common.sh@973 -- # kill 56714 00:05:15.089 12:51:46 rpc -- common/autotest_common.sh@978 -- # wait 56714 00:05:15.657 00:05:15.657 real 0m2.837s 00:05:15.657 user 0m3.482s 00:05:15.657 sys 0m0.767s 00:05:15.657 12:51:46 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.657 ************************************ 00:05:15.657 END TEST rpc 00:05:15.657 ************************************ 00:05:15.657 12:51:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.657 12:51:46 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:15.657 12:51:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.657 12:51:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.657 12:51:46 -- common/autotest_common.sh@10 -- # set +x 00:05:15.657 ************************************ 00:05:15.657 START TEST skip_rpc 00:05:15.657 ************************************ 00:05:15.657 12:51:46 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:15.657 * Looking for test storage... 00:05:15.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:15.657 12:51:47 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:15.657 12:51:47 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:15.657 12:51:47 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:15.657 12:51:47 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:15.657 12:51:47 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.657 12:51:47 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.657 12:51:47 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.657 12:51:47 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.657 12:51:47 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.657 12:51:47 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.657 12:51:47 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.657 12:51:47 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.657 12:51:47 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.657 12:51:47 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.657 12:51:47 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.657 12:51:47 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:15.657 12:51:47 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:15.657 12:51:47 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.657 12:51:47 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:15.657 12:51:47 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:15.657 12:51:47 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:15.657 12:51:47 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.657 12:51:47 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:15.657 12:51:47 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.657 12:51:47 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:15.657 12:51:47 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:15.657 12:51:47 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.657 12:51:47 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:15.657 12:51:47 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.657 12:51:47 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.657 12:51:47 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.657 12:51:47 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:15.657 12:51:47 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.657 12:51:47 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:15.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.657 --rc genhtml_branch_coverage=1 00:05:15.657 --rc genhtml_function_coverage=1 00:05:15.657 --rc genhtml_legend=1 00:05:15.657 --rc geninfo_all_blocks=1 00:05:15.657 --rc geninfo_unexecuted_blocks=1 00:05:15.657 00:05:15.657 ' 00:05:15.657 12:51:47 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:15.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.657 --rc genhtml_branch_coverage=1 00:05:15.657 --rc genhtml_function_coverage=1 00:05:15.657 --rc genhtml_legend=1 00:05:15.657 --rc geninfo_all_blocks=1 00:05:15.657 --rc geninfo_unexecuted_blocks=1 00:05:15.657 00:05:15.657 ' 00:05:15.657 12:51:47 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:15.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.657 --rc genhtml_branch_coverage=1 00:05:15.657 --rc genhtml_function_coverage=1 00:05:15.657 --rc genhtml_legend=1 00:05:15.657 --rc geninfo_all_blocks=1 00:05:15.657 --rc geninfo_unexecuted_blocks=1 00:05:15.657 00:05:15.657 ' 00:05:15.657 12:51:47 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:15.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.657 --rc genhtml_branch_coverage=1 00:05:15.657 --rc genhtml_function_coverage=1 00:05:15.657 --rc genhtml_legend=1 00:05:15.657 --rc geninfo_all_blocks=1 00:05:15.657 --rc geninfo_unexecuted_blocks=1 00:05:15.657 00:05:15.657 ' 00:05:15.657 12:51:47 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:15.657 12:51:47 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:15.657 12:51:47 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:15.657 12:51:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.657 12:51:47 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.657 12:51:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.657 ************************************ 00:05:15.657 START TEST skip_rpc 00:05:15.657 ************************************ 00:05:15.657 12:51:47 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:15.657 12:51:47 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=56912 00:05:15.657 12:51:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.657 12:51:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:15.657 12:51:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:15.916 [2024-11-29 12:51:47.241010] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:05:15.916 [2024-11-29 12:51:47.241322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56912 ] 00:05:15.916 [2024-11-29 12:51:47.395585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.175 [2024-11-29 12:51:47.471327] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.175 [2024-11-29 12:51:47.551320] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:21.451 12:51:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:21.451 12:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:21.451 12:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:21.451 12:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:21.451 12:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:21.451 12:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:21.451 12:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:21.451 12:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:21.451 12:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.451 12:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.451 12:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:21.451 12:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:21.451 12:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:21.451 12:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:21.451 12:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:21.451 12:51:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:21.451 12:51:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56912 00:05:21.451 12:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56912 ']' 00:05:21.451 12:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56912 00:05:21.451 12:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:21.451 12:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.451 12:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56912 00:05:21.451 killing process with pid 56912 00:05:21.451 12:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.451 12:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.451 12:51:52 skip_rpc.skip_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 56912' 00:05:21.451 12:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56912 00:05:21.451 12:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56912 00:05:21.451 00:05:21.451 real 0m5.595s 00:05:21.451 user 0m5.187s 00:05:21.451 sys 0m0.309s 00:05:21.451 12:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.451 12:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.451 ************************************ 00:05:21.451 END TEST skip_rpc 00:05:21.451 ************************************ 00:05:21.451 12:51:52 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:21.451 12:51:52 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.451 12:51:52 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.451 12:51:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.451 ************************************ 00:05:21.451 START TEST skip_rpc_with_json 00:05:21.451 ************************************ 00:05:21.451 12:51:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:21.451 12:51:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:21.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.451 12:51:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56999 00:05:21.451 12:51:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:21.451 12:51:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.451 12:51:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56999 00:05:21.451 12:51:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 56999 ']' 00:05:21.451 12:51:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.451 12:51:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.451 12:51:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.451 12:51:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.451 12:51:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:21.452 [2024-11-29 12:51:52.896365] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:05:21.452 [2024-11-29 12:51:52.896504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56999 ] 00:05:21.711 [2024-11-29 12:51:53.044945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.711 [2024-11-29 12:51:53.101380] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.711 [2024-11-29 12:51:53.199802] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:22.676 12:51:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.676 12:51:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:22.676 12:51:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:22.676 12:51:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.676 12:51:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.676 [2024-11-29 12:51:53.920373] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:22.676 request: 00:05:22.676 { 00:05:22.676 "trtype": "tcp", 00:05:22.676 "method": "nvmf_get_transports", 00:05:22.676 "req_id": 1 00:05:22.676 } 00:05:22.676 Got JSON-RPC error response 00:05:22.676 response: 00:05:22.676 { 00:05:22.676 "code": -19, 00:05:22.676 "message": "No such device" 00:05:22.676 } 00:05:22.676 12:51:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:22.676 12:51:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:22.676 12:51:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.676 12:51:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.676 [2024-11-29 12:51:53.932555] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:22.676 12:51:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.676 12:51:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:22.676 12:51:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.676 12:51:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.676 12:51:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.676 12:51:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:22.676 { 00:05:22.676 "subsystems": [ 00:05:22.676 { 00:05:22.676 "subsystem": "fsdev", 00:05:22.676 "config": [ 00:05:22.676 { 00:05:22.676 "method": "fsdev_set_opts", 00:05:22.677 "params": { 00:05:22.677 "fsdev_io_pool_size": 65535, 00:05:22.677 "fsdev_io_cache_size": 256 00:05:22.677 } 00:05:22.677 } 00:05:22.677 ] 00:05:22.677 }, 00:05:22.677 { 00:05:22.677 "subsystem": "keyring", 00:05:22.677 "config": [] 00:05:22.677 }, 00:05:22.677 { 00:05:22.677 "subsystem": "iobuf", 00:05:22.677 "config": [ 00:05:22.677 { 00:05:22.677 "method": "iobuf_set_options", 00:05:22.677 "params": { 00:05:22.677 "small_pool_count": 8192, 00:05:22.677 "large_pool_count": 1024, 00:05:22.677 "small_bufsize": 8192, 00:05:22.677 "large_bufsize": 135168, 00:05:22.677 "enable_numa": false 00:05:22.677 } 
00:05:22.677 } 00:05:22.677 ] 00:05:22.677 }, 00:05:22.677 { 00:05:22.677 "subsystem": "sock", 00:05:22.677 "config": [ 00:05:22.677 { 00:05:22.677 "method": "sock_set_default_impl", 00:05:22.677 "params": { 00:05:22.677 "impl_name": "uring" 00:05:22.677 } 00:05:22.677 }, 00:05:22.677 { 00:05:22.677 "method": "sock_impl_set_options", 00:05:22.677 "params": { 00:05:22.677 "impl_name": "ssl", 00:05:22.677 "recv_buf_size": 4096, 00:05:22.677 "send_buf_size": 4096, 00:05:22.677 "enable_recv_pipe": true, 00:05:22.677 "enable_quickack": false, 00:05:22.677 "enable_placement_id": 0, 00:05:22.677 "enable_zerocopy_send_server": true, 00:05:22.677 "enable_zerocopy_send_client": false, 00:05:22.677 "zerocopy_threshold": 0, 00:05:22.677 "tls_version": 0, 00:05:22.677 "enable_ktls": false 00:05:22.677 } 00:05:22.677 }, 00:05:22.677 { 00:05:22.677 "method": "sock_impl_set_options", 00:05:22.677 "params": { 00:05:22.677 "impl_name": "posix", 00:05:22.677 "recv_buf_size": 2097152, 00:05:22.677 "send_buf_size": 2097152, 00:05:22.677 "enable_recv_pipe": true, 00:05:22.677 "enable_quickack": false, 00:05:22.677 "enable_placement_id": 0, 00:05:22.677 "enable_zerocopy_send_server": true, 00:05:22.677 "enable_zerocopy_send_client": false, 00:05:22.677 "zerocopy_threshold": 0, 00:05:22.677 "tls_version": 0, 00:05:22.677 "enable_ktls": false 00:05:22.677 } 00:05:22.677 }, 00:05:22.677 { 00:05:22.677 "method": "sock_impl_set_options", 00:05:22.677 "params": { 00:05:22.677 "impl_name": "uring", 00:05:22.677 "recv_buf_size": 2097152, 00:05:22.677 "send_buf_size": 2097152, 00:05:22.677 "enable_recv_pipe": true, 00:05:22.677 "enable_quickack": false, 00:05:22.677 "enable_placement_id": 0, 00:05:22.677 "enable_zerocopy_send_server": false, 00:05:22.677 "enable_zerocopy_send_client": false, 00:05:22.677 "zerocopy_threshold": 0, 00:05:22.677 "tls_version": 0, 00:05:22.677 "enable_ktls": false 00:05:22.677 } 00:05:22.677 } 00:05:22.677 ] 00:05:22.677 }, 00:05:22.677 { 00:05:22.677 "subsystem": "vmd", 00:05:22.677 "config": [] 00:05:22.677 }, 00:05:22.677 { 00:05:22.677 "subsystem": "accel", 00:05:22.677 "config": [ 00:05:22.677 { 00:05:22.677 "method": "accel_set_options", 00:05:22.677 "params": { 00:05:22.677 "small_cache_size": 128, 00:05:22.677 "large_cache_size": 16, 00:05:22.677 "task_count": 2048, 00:05:22.677 "sequence_count": 2048, 00:05:22.677 "buf_count": 2048 00:05:22.677 } 00:05:22.677 } 00:05:22.677 ] 00:05:22.677 }, 00:05:22.677 { 00:05:22.677 "subsystem": "bdev", 00:05:22.677 "config": [ 00:05:22.677 { 00:05:22.677 "method": "bdev_set_options", 00:05:22.677 "params": { 00:05:22.677 "bdev_io_pool_size": 65535, 00:05:22.677 "bdev_io_cache_size": 256, 00:05:22.677 "bdev_auto_examine": true, 00:05:22.677 "iobuf_small_cache_size": 128, 00:05:22.677 "iobuf_large_cache_size": 16 00:05:22.677 } 00:05:22.677 }, 00:05:22.677 { 00:05:22.677 "method": "bdev_raid_set_options", 00:05:22.677 "params": { 00:05:22.677 "process_window_size_kb": 1024, 00:05:22.677 "process_max_bandwidth_mb_sec": 0 00:05:22.677 } 00:05:22.677 }, 00:05:22.677 { 00:05:22.677 "method": "bdev_iscsi_set_options", 00:05:22.677 "params": { 00:05:22.677 "timeout_sec": 30 00:05:22.677 } 00:05:22.677 }, 00:05:22.677 { 00:05:22.677 "method": "bdev_nvme_set_options", 00:05:22.677 "params": { 00:05:22.677 "action_on_timeout": "none", 00:05:22.677 "timeout_us": 0, 00:05:22.677 "timeout_admin_us": 0, 00:05:22.677 "keep_alive_timeout_ms": 10000, 00:05:22.677 "arbitration_burst": 0, 00:05:22.677 "low_priority_weight": 0, 00:05:22.677 "medium_priority_weight": 
0, 00:05:22.677 "high_priority_weight": 0, 00:05:22.677 "nvme_adminq_poll_period_us": 10000, 00:05:22.677 "nvme_ioq_poll_period_us": 0, 00:05:22.677 "io_queue_requests": 0, 00:05:22.677 "delay_cmd_submit": true, 00:05:22.677 "transport_retry_count": 4, 00:05:22.677 "bdev_retry_count": 3, 00:05:22.677 "transport_ack_timeout": 0, 00:05:22.677 "ctrlr_loss_timeout_sec": 0, 00:05:22.677 "reconnect_delay_sec": 0, 00:05:22.677 "fast_io_fail_timeout_sec": 0, 00:05:22.677 "disable_auto_failback": false, 00:05:22.677 "generate_uuids": false, 00:05:22.677 "transport_tos": 0, 00:05:22.677 "nvme_error_stat": false, 00:05:22.677 "rdma_srq_size": 0, 00:05:22.677 "io_path_stat": false, 00:05:22.677 "allow_accel_sequence": false, 00:05:22.677 "rdma_max_cq_size": 0, 00:05:22.677 "rdma_cm_event_timeout_ms": 0, 00:05:22.677 "dhchap_digests": [ 00:05:22.677 "sha256", 00:05:22.677 "sha384", 00:05:22.677 "sha512" 00:05:22.677 ], 00:05:22.677 "dhchap_dhgroups": [ 00:05:22.677 "null", 00:05:22.677 "ffdhe2048", 00:05:22.677 "ffdhe3072", 00:05:22.677 "ffdhe4096", 00:05:22.677 "ffdhe6144", 00:05:22.677 "ffdhe8192" 00:05:22.677 ] 00:05:22.677 } 00:05:22.677 }, 00:05:22.677 { 00:05:22.677 "method": "bdev_nvme_set_hotplug", 00:05:22.677 "params": { 00:05:22.677 "period_us": 100000, 00:05:22.677 "enable": false 00:05:22.677 } 00:05:22.677 }, 00:05:22.677 { 00:05:22.677 "method": "bdev_wait_for_examine" 00:05:22.677 } 00:05:22.677 ] 00:05:22.677 }, 00:05:22.677 { 00:05:22.677 "subsystem": "scsi", 00:05:22.677 "config": null 00:05:22.677 }, 00:05:22.677 { 00:05:22.677 "subsystem": "scheduler", 00:05:22.677 "config": [ 00:05:22.677 { 00:05:22.678 "method": "framework_set_scheduler", 00:05:22.678 "params": { 00:05:22.678 "name": "static" 00:05:22.678 } 00:05:22.678 } 00:05:22.678 ] 00:05:22.678 }, 00:05:22.678 { 00:05:22.678 "subsystem": "vhost_scsi", 00:05:22.678 "config": [] 00:05:22.678 }, 00:05:22.678 { 00:05:22.678 "subsystem": "vhost_blk", 00:05:22.678 "config": [] 00:05:22.678 }, 00:05:22.678 { 00:05:22.678 "subsystem": "ublk", 00:05:22.678 "config": [] 00:05:22.678 }, 00:05:22.678 { 00:05:22.678 "subsystem": "nbd", 00:05:22.678 "config": [] 00:05:22.678 }, 00:05:22.678 { 00:05:22.678 "subsystem": "nvmf", 00:05:22.678 "config": [ 00:05:22.678 { 00:05:22.678 "method": "nvmf_set_config", 00:05:22.678 "params": { 00:05:22.678 "discovery_filter": "match_any", 00:05:22.678 "admin_cmd_passthru": { 00:05:22.678 "identify_ctrlr": false 00:05:22.678 }, 00:05:22.678 "dhchap_digests": [ 00:05:22.678 "sha256", 00:05:22.678 "sha384", 00:05:22.678 "sha512" 00:05:22.678 ], 00:05:22.678 "dhchap_dhgroups": [ 00:05:22.678 "null", 00:05:22.678 "ffdhe2048", 00:05:22.678 "ffdhe3072", 00:05:22.678 "ffdhe4096", 00:05:22.678 "ffdhe6144", 00:05:22.678 "ffdhe8192" 00:05:22.678 ] 00:05:22.678 } 00:05:22.678 }, 00:05:22.678 { 00:05:22.678 "method": "nvmf_set_max_subsystems", 00:05:22.678 "params": { 00:05:22.678 "max_subsystems": 1024 00:05:22.678 } 00:05:22.678 }, 00:05:22.678 { 00:05:22.678 "method": "nvmf_set_crdt", 00:05:22.678 "params": { 00:05:22.678 "crdt1": 0, 00:05:22.678 "crdt2": 0, 00:05:22.678 "crdt3": 0 00:05:22.678 } 00:05:22.678 }, 00:05:22.678 { 00:05:22.678 "method": "nvmf_create_transport", 00:05:22.678 "params": { 00:05:22.678 "trtype": "TCP", 00:05:22.678 "max_queue_depth": 128, 00:05:22.678 "max_io_qpairs_per_ctrlr": 127, 00:05:22.678 "in_capsule_data_size": 4096, 00:05:22.678 "max_io_size": 131072, 00:05:22.678 "io_unit_size": 131072, 00:05:22.678 "max_aq_depth": 128, 00:05:22.678 "num_shared_buffers": 511, 00:05:22.678 
"buf_cache_size": 4294967295, 00:05:22.678 "dif_insert_or_strip": false, 00:05:22.678 "zcopy": false, 00:05:22.678 "c2h_success": true, 00:05:22.678 "sock_priority": 0, 00:05:22.678 "abort_timeout_sec": 1, 00:05:22.678 "ack_timeout": 0, 00:05:22.678 "data_wr_pool_size": 0 00:05:22.678 } 00:05:22.678 } 00:05:22.678 ] 00:05:22.678 }, 00:05:22.678 { 00:05:22.678 "subsystem": "iscsi", 00:05:22.678 "config": [ 00:05:22.678 { 00:05:22.678 "method": "iscsi_set_options", 00:05:22.678 "params": { 00:05:22.678 "node_base": "iqn.2016-06.io.spdk", 00:05:22.678 "max_sessions": 128, 00:05:22.678 "max_connections_per_session": 2, 00:05:22.678 "max_queue_depth": 64, 00:05:22.678 "default_time2wait": 2, 00:05:22.678 "default_time2retain": 20, 00:05:22.678 "first_burst_length": 8192, 00:05:22.678 "immediate_data": true, 00:05:22.678 "allow_duplicated_isid": false, 00:05:22.678 "error_recovery_level": 0, 00:05:22.678 "nop_timeout": 60, 00:05:22.678 "nop_in_interval": 30, 00:05:22.678 "disable_chap": false, 00:05:22.678 "require_chap": false, 00:05:22.678 "mutual_chap": false, 00:05:22.678 "chap_group": 0, 00:05:22.678 "max_large_datain_per_connection": 64, 00:05:22.678 "max_r2t_per_connection": 4, 00:05:22.678 "pdu_pool_size": 36864, 00:05:22.678 "immediate_data_pool_size": 16384, 00:05:22.678 "data_out_pool_size": 2048 00:05:22.678 } 00:05:22.678 } 00:05:22.678 ] 00:05:22.678 } 00:05:22.678 ] 00:05:22.678 } 00:05:22.678 12:51:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:22.678 12:51:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56999 00:05:22.678 12:51:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 56999 ']' 00:05:22.678 12:51:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 56999 00:05:22.678 12:51:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:22.678 12:51:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.678 12:51:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56999 00:05:22.678 killing process with pid 56999 00:05:22.678 12:51:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.678 12:51:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.678 12:51:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56999' 00:05:22.678 12:51:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 56999 00:05:22.678 12:51:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 56999 00:05:23.248 12:51:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57032 00:05:23.248 12:51:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:23.248 12:51:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:28.522 12:51:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57032 00:05:28.522 12:51:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57032 ']' 00:05:28.522 12:51:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57032 00:05:28.522 12:51:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:28.522 12:51:59 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.522 12:51:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57032 00:05:28.522 killing process with pid 57032 00:05:28.522 12:51:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.522 12:51:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.522 12:51:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57032' 00:05:28.522 12:51:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57032 00:05:28.522 12:51:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57032 00:05:28.780 12:52:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:28.780 12:52:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:28.780 ************************************ 00:05:28.780 END TEST skip_rpc_with_json 00:05:28.780 ************************************ 00:05:28.780 00:05:28.780 real 0m7.376s 00:05:28.780 user 0m6.957s 00:05:28.780 sys 0m0.870s 00:05:28.780 12:52:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.780 12:52:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:28.780 12:52:00 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:28.780 12:52:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.780 12:52:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.780 12:52:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.780 ************************************ 00:05:28.780 START TEST skip_rpc_with_delay 00:05:28.780 ************************************ 00:05:28.780 12:52:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:28.780 12:52:00 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:28.780 12:52:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:28.780 12:52:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:28.780 12:52:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.780 12:52:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:28.780 12:52:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.780 12:52:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:28.780 12:52:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.780 12:52:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:28.780 12:52:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.780 12:52:00 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:28.780 12:52:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:29.038 [2024-11-29 12:52:00.318970] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:29.038 12:52:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:29.038 12:52:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:29.038 12:52:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:29.038 12:52:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:29.038 00:05:29.038 real 0m0.092s 00:05:29.038 user 0m0.070s 00:05:29.038 sys 0m0.021s 00:05:29.038 ************************************ 00:05:29.038 END TEST skip_rpc_with_delay 00:05:29.038 ************************************ 00:05:29.038 12:52:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.038 12:52:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:29.038 12:52:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:29.038 12:52:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:29.038 12:52:00 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:29.038 12:52:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.038 12:52:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.038 12:52:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.038 ************************************ 00:05:29.038 START TEST exit_on_failed_rpc_init 00:05:29.038 ************************************ 00:05:29.038 12:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:29.038 12:52:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57141 00:05:29.038 12:52:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.038 12:52:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57141 00:05:29.038 12:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57141 ']' 00:05:29.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.038 12:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.038 12:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.038 12:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.038 12:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.038 12:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:29.038 [2024-11-29 12:52:00.451207] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
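For reference, the exit_on_failed_rpc_init case starting here provokes an RPC-socket collision on purpose: the target launched above holds the default Unix domain socket, and a second target started against the same socket must fail. A minimal sketch of the two invocations involved, assuming the default socket path /var/tmp/spdk.sock (running them by hand like this is illustrative only; the test itself drives them through waitforlisten and its NOT wrapper):

# First target: comes up normally and listens on /var/tmp/spdk.sock.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
# Second target, same socket: rpc_listen reports the path is already in use and
# spdk_app_start fails, which is the non-zero exit the test asserts on.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2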
00:05:29.038 [2024-11-29 12:52:00.451278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57141 ] 00:05:29.296 [2024-11-29 12:52:00.590765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.296 [2024-11-29 12:52:00.649093] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.296 [2024-11-29 12:52:00.725876] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:29.559 12:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.559 12:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:29.559 12:52:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.559 12:52:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:29.559 12:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:29.559 12:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:29.559 12:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:29.559 12:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:29.559 12:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:29.559 12:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:29.559 12:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:29.559 12:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:29.559 12:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:29.559 12:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:29.559 12:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:29.559 [2024-11-29 12:52:01.032262] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:05:29.559 [2024-11-29 12:52:01.032372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57152 ] 00:05:29.818 [2024-11-29 12:52:01.185797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.818 [2024-11-29 12:52:01.256688] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.818 [2024-11-29 12:52:01.256782] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:29.818 [2024-11-29 12:52:01.256800] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:29.818 [2024-11-29 12:52:01.256811] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:29.818 12:52:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:29.818 12:52:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:29.818 12:52:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:29.818 12:52:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:29.818 12:52:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:29.818 12:52:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:29.818 12:52:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:29.818 12:52:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57141 00:05:29.818 12:52:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57141 ']' 00:05:29.818 12:52:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57141 00:05:29.818 12:52:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:30.076 12:52:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.076 12:52:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57141 00:05:30.076 killing process with pid 57141 00:05:30.076 12:52:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.076 12:52:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.076 12:52:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57141' 00:05:30.076 12:52:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57141 00:05:30.076 12:52:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57141 00:05:30.334 ************************************ 00:05:30.334 END TEST exit_on_failed_rpc_init 00:05:30.334 ************************************ 00:05:30.334 00:05:30.334 real 0m1.374s 00:05:30.334 user 0m1.435s 00:05:30.334 sys 0m0.422s 00:05:30.334 12:52:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.334 12:52:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:30.334 12:52:01 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:30.334 ************************************ 00:05:30.334 END TEST skip_rpc 00:05:30.334 ************************************ 00:05:30.334 00:05:30.334 real 0m14.863s 00:05:30.334 user 0m13.837s 00:05:30.334 sys 0m1.849s 00:05:30.334 12:52:01 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.334 12:52:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.592 12:52:01 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:30.592 12:52:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.592 12:52:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.592 12:52:01 -- common/autotest_common.sh@10 -- # set +x 00:05:30.592 
************************************ 00:05:30.592 START TEST rpc_client 00:05:30.592 ************************************ 00:05:30.592 12:52:01 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:30.592 * Looking for test storage... 00:05:30.592 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:30.592 12:52:01 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:30.592 12:52:01 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:30.592 12:52:01 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:30.592 12:52:02 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:30.592 12:52:02 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.592 12:52:02 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.592 12:52:02 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.592 12:52:02 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.592 12:52:02 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.592 12:52:02 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.592 12:52:02 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.592 12:52:02 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.592 12:52:02 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.592 12:52:02 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.592 12:52:02 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.592 12:52:02 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:30.592 12:52:02 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:30.592 12:52:02 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.592 12:52:02 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.592 12:52:02 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:30.592 12:52:02 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:30.592 12:52:02 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.592 12:52:02 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:30.592 12:52:02 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.592 12:52:02 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:30.592 12:52:02 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:30.592 12:52:02 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.592 12:52:02 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:30.592 12:52:02 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.592 12:52:02 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.592 12:52:02 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.592 12:52:02 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:30.592 12:52:02 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.592 12:52:02 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:30.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.592 --rc genhtml_branch_coverage=1 00:05:30.592 --rc genhtml_function_coverage=1 00:05:30.592 --rc genhtml_legend=1 00:05:30.592 --rc geninfo_all_blocks=1 00:05:30.592 --rc geninfo_unexecuted_blocks=1 00:05:30.592 00:05:30.592 ' 00:05:30.592 12:52:02 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:30.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.592 --rc genhtml_branch_coverage=1 00:05:30.592 --rc genhtml_function_coverage=1 00:05:30.592 --rc genhtml_legend=1 00:05:30.592 --rc geninfo_all_blocks=1 00:05:30.592 --rc geninfo_unexecuted_blocks=1 00:05:30.592 00:05:30.592 ' 00:05:30.592 12:52:02 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:30.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.592 --rc genhtml_branch_coverage=1 00:05:30.592 --rc genhtml_function_coverage=1 00:05:30.592 --rc genhtml_legend=1 00:05:30.592 --rc geninfo_all_blocks=1 00:05:30.592 --rc geninfo_unexecuted_blocks=1 00:05:30.592 00:05:30.592 ' 00:05:30.592 12:52:02 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:30.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.592 --rc genhtml_branch_coverage=1 00:05:30.592 --rc genhtml_function_coverage=1 00:05:30.592 --rc genhtml_legend=1 00:05:30.592 --rc geninfo_all_blocks=1 00:05:30.592 --rc geninfo_unexecuted_blocks=1 00:05:30.592 00:05:30.592 ' 00:05:30.592 12:52:02 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:30.592 OK 00:05:30.592 12:52:02 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:30.592 00:05:30.592 real 0m0.216s 00:05:30.592 user 0m0.132s 00:05:30.592 sys 0m0.092s 00:05:30.592 12:52:02 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.592 12:52:02 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:30.592 ************************************ 00:05:30.592 END TEST rpc_client 00:05:30.592 ************************************ 00:05:30.851 12:52:02 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:30.851 12:52:02 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.851 12:52:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.851 12:52:02 -- common/autotest_common.sh@10 -- # set +x 00:05:30.851 ************************************ 00:05:30.851 START TEST json_config 00:05:30.851 ************************************ 00:05:30.851 12:52:02 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:30.851 12:52:02 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:30.851 12:52:02 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:30.851 12:52:02 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:30.851 12:52:02 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:30.851 12:52:02 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.851 12:52:02 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.851 12:52:02 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.851 12:52:02 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.851 12:52:02 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.851 12:52:02 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.851 12:52:02 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.851 12:52:02 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.851 12:52:02 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.851 12:52:02 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.851 12:52:02 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.851 12:52:02 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:30.851 12:52:02 json_config -- scripts/common.sh@345 -- # : 1 00:05:30.852 12:52:02 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.852 12:52:02 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.852 12:52:02 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:30.852 12:52:02 json_config -- scripts/common.sh@353 -- # local d=1 00:05:30.852 12:52:02 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.852 12:52:02 json_config -- scripts/common.sh@355 -- # echo 1 00:05:30.852 12:52:02 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.852 12:52:02 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:30.852 12:52:02 json_config -- scripts/common.sh@353 -- # local d=2 00:05:30.852 12:52:02 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.852 12:52:02 json_config -- scripts/common.sh@355 -- # echo 2 00:05:30.852 12:52:02 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.852 12:52:02 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.852 12:52:02 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.852 12:52:02 json_config -- scripts/common.sh@368 -- # return 0 00:05:30.852 12:52:02 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.852 12:52:02 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:30.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.852 --rc genhtml_branch_coverage=1 00:05:30.852 --rc genhtml_function_coverage=1 00:05:30.852 --rc genhtml_legend=1 00:05:30.852 --rc geninfo_all_blocks=1 00:05:30.852 --rc geninfo_unexecuted_blocks=1 00:05:30.852 00:05:30.852 ' 00:05:30.852 12:52:02 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:30.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.852 --rc genhtml_branch_coverage=1 00:05:30.852 --rc genhtml_function_coverage=1 00:05:30.852 --rc genhtml_legend=1 00:05:30.852 --rc geninfo_all_blocks=1 00:05:30.852 --rc geninfo_unexecuted_blocks=1 00:05:30.852 00:05:30.852 ' 00:05:30.852 12:52:02 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:30.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.852 --rc genhtml_branch_coverage=1 00:05:30.852 --rc genhtml_function_coverage=1 00:05:30.852 --rc genhtml_legend=1 00:05:30.852 --rc geninfo_all_blocks=1 00:05:30.852 --rc geninfo_unexecuted_blocks=1 00:05:30.852 00:05:30.852 ' 00:05:30.852 12:52:02 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:30.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.852 --rc genhtml_branch_coverage=1 00:05:30.852 --rc genhtml_function_coverage=1 00:05:30.852 --rc genhtml_legend=1 00:05:30.852 --rc geninfo_all_blocks=1 00:05:30.852 --rc geninfo_unexecuted_blocks=1 00:05:30.852 00:05:30.852 ' 00:05:30.852 12:52:02 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:30.852 12:52:02 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:30.852 12:52:02 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:30.852 12:52:02 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:30.852 12:52:02 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:30.852 12:52:02 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:30.852 12:52:02 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:30.852 12:52:02 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:30.852 12:52:02 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:30.852 12:52:02 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:30.852 12:52:02 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:30.852 12:52:02 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:30.852 12:52:02 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:05:30.852 12:52:02 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:05:30.852 12:52:02 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:30.852 12:52:02 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:30.852 12:52:02 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:30.852 12:52:02 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:30.852 12:52:02 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:30.852 12:52:02 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:30.852 12:52:02 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:30.852 12:52:02 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:30.852 12:52:02 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:30.852 12:52:02 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.852 12:52:02 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.852 12:52:02 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.852 12:52:02 json_config -- paths/export.sh@5 -- # export PATH 00:05:30.852 12:52:02 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.852 12:52:02 json_config -- nvmf/common.sh@51 -- # : 0 00:05:30.852 12:52:02 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:30.852 12:52:02 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:30.852 12:52:02 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:30.852 12:52:02 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:30.852 12:52:02 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:30.852 12:52:02 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:30.852 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:30.852 12:52:02 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:30.852 12:52:02 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:30.852 12:52:02 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:30.852 12:52:02 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:30.852 12:52:02 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:30.852 12:52:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:30.852 12:52:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:30.852 12:52:02 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:30.852 12:52:02 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:30.852 12:52:02 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:30.852 12:52:02 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:30.852 12:52:02 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:30.852 12:52:02 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:30.852 12:52:02 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:30.852 12:52:02 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:30.852 12:52:02 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:30.852 INFO: JSON configuration test init 00:05:30.852 12:52:02 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:30.852 12:52:02 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:30.852 12:52:02 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:30.852 12:52:02 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:30.852 12:52:02 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:30.852 12:52:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:30.852 12:52:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.852 12:52:02 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:30.852 12:52:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:30.852 12:52:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.852 12:52:02 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:30.852 12:52:02 json_config -- json_config/common.sh@9 -- # local app=target 00:05:30.852 12:52:02 json_config -- json_config/common.sh@10 -- # shift 
00:05:30.852 12:52:02 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:30.852 12:52:02 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:30.852 12:52:02 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:30.852 12:52:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:30.852 12:52:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:30.852 12:52:02 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57291 00:05:30.852 12:52:02 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:30.852 12:52:02 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:30.852 Waiting for target to run... 00:05:30.852 12:52:02 json_config -- json_config/common.sh@25 -- # waitforlisten 57291 /var/tmp/spdk_tgt.sock 00:05:30.852 12:52:02 json_config -- common/autotest_common.sh@835 -- # '[' -z 57291 ']' 00:05:30.852 12:52:02 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:30.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:30.852 12:52:02 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.852 12:52:02 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:30.853 12:52:02 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.853 12:52:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.111 [2024-11-29 12:52:02.415808] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:05:31.111 [2024-11-29 12:52:02.415964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57291 ] 00:05:31.370 [2024-11-29 12:52:02.866977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.629 [2024-11-29 12:52:02.921633] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.211 12:52:03 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.211 12:52:03 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:32.211 00:05:32.211 12:52:03 json_config -- json_config/common.sh@26 -- # echo '' 00:05:32.211 12:52:03 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:32.211 12:52:03 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:32.211 12:52:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:32.211 12:52:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.211 12:52:03 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:32.211 12:52:03 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:32.211 12:52:03 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:32.211 12:52:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.211 12:52:03 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:32.211 12:52:03 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:32.211 12:52:03 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:32.469 [2024-11-29 12:52:03.807533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:32.727 12:52:04 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:32.727 12:52:04 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:32.728 12:52:04 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:32.728 12:52:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.728 12:52:04 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:32.728 12:52:04 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:32.728 12:52:04 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:32.728 12:52:04 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:32.728 12:52:04 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:32.728 12:52:04 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:32.728 12:52:04 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:32.728 12:52:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:32.998 12:52:04 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:32.998 12:52:04 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:32.998 12:52:04 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:32.998 12:52:04 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:32.998 12:52:04 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:32.998 12:52:04 json_config -- json_config/json_config.sh@54 -- # sort 00:05:32.998 12:52:04 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:32.998 12:52:04 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:32.998 12:52:04 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:32.998 12:52:04 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:32.998 12:52:04 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:32.998 12:52:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.998 12:52:04 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:32.998 12:52:04 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:32.998 12:52:04 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:32.998 12:52:04 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:32.998 12:52:04 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:32.998 12:52:04 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:32.998 12:52:04 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:32.998 12:52:04 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:32.998 12:52:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.998 12:52:04 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:32.998 12:52:04 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:32.998 12:52:04 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:32.998 12:52:04 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:32.998 12:52:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:33.264 MallocForNvmf0 00:05:33.264 12:52:04 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:33.264 12:52:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:33.523 MallocForNvmf1 00:05:33.523 12:52:05 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:33.523 12:52:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:33.791 [2024-11-29 12:52:05.231486] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:33.791 12:52:05 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:33.791 12:52:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:34.050 12:52:05 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:34.050 12:52:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:34.308 12:52:05 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:34.308 12:52:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:34.566 12:52:05 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:34.566 12:52:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:34.825 [2024-11-29 12:52:06.152102] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:34.825 12:52:06 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:34.825 12:52:06 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:34.825 12:52:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.825 12:52:06 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:34.825 12:52:06 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:34.825 12:52:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.825 12:52:06 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:05:34.825 12:52:06 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:34.825 12:52:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:35.083 MallocBdevForConfigChangeCheck 00:05:35.083 12:52:06 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:35.083 12:52:06 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:35.083 12:52:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.083 12:52:06 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:35.083 12:52:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:35.649 INFO: shutting down applications... 00:05:35.649 12:52:07 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:35.649 12:52:07 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:35.649 12:52:07 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:35.649 12:52:07 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:35.649 12:52:07 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:35.908 Calling clear_iscsi_subsystem 00:05:35.908 Calling clear_nvmf_subsystem 00:05:35.908 Calling clear_nbd_subsystem 00:05:35.908 Calling clear_ublk_subsystem 00:05:35.908 Calling clear_vhost_blk_subsystem 00:05:35.908 Calling clear_vhost_scsi_subsystem 00:05:35.908 Calling clear_bdev_subsystem 00:05:35.908 12:52:07 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:35.908 12:52:07 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:35.908 12:52:07 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:35.908 12:52:07 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:35.908 12:52:07 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:35.908 12:52:07 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:36.476 12:52:07 json_config -- json_config/json_config.sh@352 -- # break 00:05:36.476 12:52:07 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:36.476 12:52:07 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:36.476 12:52:07 json_config -- json_config/common.sh@31 -- # local app=target 00:05:36.476 12:52:07 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:36.476 12:52:07 json_config -- json_config/common.sh@35 -- # [[ -n 57291 ]] 00:05:36.476 12:52:07 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57291 00:05:36.476 12:52:07 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:36.476 12:52:07 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.476 12:52:07 json_config -- json_config/common.sh@41 -- # kill -0 57291 00:05:36.476 12:52:07 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:05:37.042 12:52:08 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:37.042 12:52:08 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.042 12:52:08 json_config -- json_config/common.sh@41 -- # kill -0 57291 00:05:37.042 12:52:08 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:37.042 SPDK target shutdown done 00:05:37.042 INFO: relaunching applications... 00:05:37.042 12:52:08 json_config -- json_config/common.sh@43 -- # break 00:05:37.042 12:52:08 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:37.042 12:52:08 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:37.042 12:52:08 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:37.042 12:52:08 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:37.042 12:52:08 json_config -- json_config/common.sh@9 -- # local app=target 00:05:37.042 12:52:08 json_config -- json_config/common.sh@10 -- # shift 00:05:37.042 12:52:08 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:37.042 12:52:08 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:37.042 12:52:08 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:37.042 12:52:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:37.042 12:52:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:37.042 12:52:08 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57487 00:05:37.042 12:52:08 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:37.042 12:52:08 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:37.042 Waiting for target to run... 00:05:37.042 12:52:08 json_config -- json_config/common.sh@25 -- # waitforlisten 57487 /var/tmp/spdk_tgt.sock 00:05:37.042 12:52:08 json_config -- common/autotest_common.sh@835 -- # '[' -z 57487 ']' 00:05:37.042 12:52:08 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:37.042 12:52:08 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.042 12:52:08 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:37.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:37.042 12:52:08 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.042 12:52:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.042 [2024-11-29 12:52:08.353721] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
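The relaunch above boots the target straight from the previously saved JSON instead of replaying individual RPCs. A sketch of that invocation, with the flags as they appear on the command line in this run (the flag descriptions are general SPDK app options, not taken from this log):

# -m 0x1: reactor core mask; -s 1024: hugepage memory size in MB;
# -r: RPC listen socket; --json: apply the saved subsystem config at startup.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
    -r /var/tmp/spdk_tgt.sock \
    --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json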
00:05:37.043 [2024-11-29 12:52:08.353834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57487 ] 00:05:37.608 [2024-11-29 12:52:08.860958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.608 [2024-11-29 12:52:08.911326] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.609 [2024-11-29 12:52:09.048034] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:37.867 [2024-11-29 12:52:09.262817] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:37.867 [2024-11-29 12:52:09.294896] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:37.867 12:52:09 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.867 00:05:37.867 INFO: Checking if target configuration is the same... 00:05:37.867 12:52:09 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:37.867 12:52:09 json_config -- json_config/common.sh@26 -- # echo '' 00:05:37.867 12:52:09 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:37.867 12:52:09 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:37.867 12:52:09 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:37.867 12:52:09 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:37.867 12:52:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:37.867 + '[' 2 -ne 2 ']' 00:05:37.867 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:37.867 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:37.867 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:37.867 +++ basename /dev/fd/62 00:05:37.867 ++ mktemp /tmp/62.XXX 00:05:37.867 + tmp_file_1=/tmp/62.IRD 00:05:37.867 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:37.867 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:37.867 + tmp_file_2=/tmp/spdk_tgt_config.json.rba 00:05:37.867 + ret=0 00:05:37.867 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:38.434 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:38.434 + diff -u /tmp/62.IRD /tmp/spdk_tgt_config.json.rba 00:05:38.434 INFO: JSON config files are the same 00:05:38.434 + echo 'INFO: JSON config files are the same' 00:05:38.434 + rm /tmp/62.IRD /tmp/spdk_tgt_config.json.rba 00:05:38.434 + exit 0 00:05:38.434 INFO: changing configuration and checking if this can be detected... 00:05:38.434 12:52:09 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:38.434 12:52:09 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:05:38.434 12:52:09 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:38.434 12:52:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:38.692 12:52:10 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:38.692 12:52:10 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:38.692 12:52:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:38.692 + '[' 2 -ne 2 ']' 00:05:38.692 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:38.692 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:38.692 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:38.692 +++ basename /dev/fd/62 00:05:38.692 ++ mktemp /tmp/62.XXX 00:05:38.692 + tmp_file_1=/tmp/62.x8H 00:05:38.692 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:38.692 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:38.692 + tmp_file_2=/tmp/spdk_tgt_config.json.5sE 00:05:38.692 + ret=0 00:05:38.692 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:39.259 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:39.259 + diff -u /tmp/62.x8H /tmp/spdk_tgt_config.json.5sE 00:05:39.259 + ret=1 00:05:39.259 + echo '=== Start of file: /tmp/62.x8H ===' 00:05:39.259 + cat /tmp/62.x8H 00:05:39.259 + echo '=== End of file: /tmp/62.x8H ===' 00:05:39.259 + echo '' 00:05:39.260 + echo '=== Start of file: /tmp/spdk_tgt_config.json.5sE ===' 00:05:39.260 + cat /tmp/spdk_tgt_config.json.5sE 00:05:39.260 + echo '=== End of file: /tmp/spdk_tgt_config.json.5sE ===' 00:05:39.260 + echo '' 00:05:39.260 + rm /tmp/62.x8H /tmp/spdk_tgt_config.json.5sE 00:05:39.260 + exit 1 00:05:39.260 INFO: configuration change detected. 00:05:39.260 12:52:10 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
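Note: the second pass deliberately changes the running configuration and checks that the same comparison now fails. The MallocBdevForConfigChangeCheck bdev exists only so it can be deleted here; once it is gone the sorted configs differ, diff exits non-zero, and ret=1 is the expected (passing) outcome. A rough sketch, reusing the comparison above and the same stdin assumption for config_filter.py:

rootdir=/home/vagrant/spdk_repo/spdk
# remove the marker bdev to induce a detectable configuration change
"$rootdir/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
# repeat the sorted diff; a non-empty diff is what the test wants to see this time
if ! diff -u /tmp/file.sorted <("$rootdir/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config | \
        "$rootdir/test/json_config/config_filter.py" -method sort); then
    echo 'INFO: configuration change detected.'
fi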
00:05:39.260 12:52:10 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:39.260 12:52:10 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:39.260 12:52:10 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:39.260 12:52:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.260 12:52:10 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:39.260 12:52:10 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:39.260 12:52:10 json_config -- json_config/json_config.sh@324 -- # [[ -n 57487 ]] 00:05:39.260 12:52:10 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:39.260 12:52:10 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:39.260 12:52:10 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:39.260 12:52:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.260 12:52:10 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:39.260 12:52:10 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:39.260 12:52:10 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:39.260 12:52:10 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:39.260 12:52:10 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:39.260 12:52:10 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:39.260 12:52:10 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:39.260 12:52:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.260 12:52:10 json_config -- json_config/json_config.sh@330 -- # killprocess 57487 00:05:39.260 12:52:10 json_config -- common/autotest_common.sh@954 -- # '[' -z 57487 ']' 00:05:39.260 12:52:10 json_config -- common/autotest_common.sh@958 -- # kill -0 57487 00:05:39.260 12:52:10 json_config -- common/autotest_common.sh@959 -- # uname 00:05:39.260 12:52:10 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.260 12:52:10 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57487 00:05:39.260 killing process with pid 57487 00:05:39.260 12:52:10 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.260 12:52:10 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.260 12:52:10 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57487' 00:05:39.260 12:52:10 json_config -- common/autotest_common.sh@973 -- # kill 57487 00:05:39.260 12:52:10 json_config -- common/autotest_common.sh@978 -- # wait 57487 00:05:39.519 12:52:10 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:39.519 12:52:10 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:39.519 12:52:10 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:39.519 12:52:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.519 12:52:10 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:39.519 INFO: Success 00:05:39.519 12:52:10 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:39.519 00:05:39.519 real 0m8.775s 00:05:39.519 user 0m12.398s 00:05:39.519 sys 0m1.918s 00:05:39.519 
12:52:10 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.519 12:52:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.519 ************************************ 00:05:39.519 END TEST json_config 00:05:39.519 ************************************ 00:05:39.519 12:52:10 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:39.519 12:52:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.519 12:52:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.519 12:52:10 -- common/autotest_common.sh@10 -- # set +x 00:05:39.519 ************************************ 00:05:39.519 START TEST json_config_extra_key 00:05:39.519 ************************************ 00:05:39.519 12:52:10 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:39.519 12:52:11 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:39.519 12:52:11 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:39.519 12:52:11 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:39.779 12:52:11 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:39.779 12:52:11 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.779 12:52:11 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.779 12:52:11 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.779 12:52:11 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.779 12:52:11 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.779 12:52:11 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.779 12:52:11 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.779 12:52:11 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.780 12:52:11 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.780 12:52:11 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.780 12:52:11 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.780 12:52:11 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:39.780 12:52:11 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:39.780 12:52:11 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.780 12:52:11 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:39.780 12:52:11 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:39.780 12:52:11 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:39.780 12:52:11 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.780 12:52:11 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:39.780 12:52:11 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.780 12:52:11 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:39.780 12:52:11 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:39.780 12:52:11 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.780 12:52:11 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:39.780 12:52:11 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.780 12:52:11 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.780 12:52:11 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.780 12:52:11 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:39.780 12:52:11 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.780 12:52:11 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:39.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.780 --rc genhtml_branch_coverage=1 00:05:39.780 --rc genhtml_function_coverage=1 00:05:39.780 --rc genhtml_legend=1 00:05:39.780 --rc geninfo_all_blocks=1 00:05:39.780 --rc geninfo_unexecuted_blocks=1 00:05:39.780 00:05:39.780 ' 00:05:39.780 12:52:11 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:39.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.780 --rc genhtml_branch_coverage=1 00:05:39.780 --rc genhtml_function_coverage=1 00:05:39.780 --rc genhtml_legend=1 00:05:39.780 --rc geninfo_all_blocks=1 00:05:39.780 --rc geninfo_unexecuted_blocks=1 00:05:39.780 00:05:39.780 ' 00:05:39.780 12:52:11 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:39.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.780 --rc genhtml_branch_coverage=1 00:05:39.780 --rc genhtml_function_coverage=1 00:05:39.780 --rc genhtml_legend=1 00:05:39.780 --rc geninfo_all_blocks=1 00:05:39.780 --rc geninfo_unexecuted_blocks=1 00:05:39.780 00:05:39.780 ' 00:05:39.780 12:52:11 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:39.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.780 --rc genhtml_branch_coverage=1 00:05:39.780 --rc genhtml_function_coverage=1 00:05:39.780 --rc genhtml_legend=1 00:05:39.780 --rc geninfo_all_blocks=1 00:05:39.780 --rc geninfo_unexecuted_blocks=1 00:05:39.780 00:05:39.780 ' 00:05:39.780 12:52:11 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:39.780 12:52:11 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:39.780 12:52:11 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:39.780 12:52:11 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:39.780 12:52:11 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:39.780 12:52:11 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:39.780 12:52:11 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:39.780 12:52:11 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:39.780 12:52:11 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:39.780 12:52:11 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:39.780 12:52:11 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:39.780 12:52:11 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:39.780 12:52:11 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:05:39.780 12:52:11 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:05:39.780 12:52:11 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:39.780 12:52:11 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:39.780 12:52:11 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:39.780 12:52:11 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:39.780 12:52:11 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:39.780 12:52:11 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:39.780 12:52:11 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:39.780 12:52:11 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:39.780 12:52:11 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:39.780 12:52:11 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.780 12:52:11 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.780 12:52:11 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.780 12:52:11 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:39.780 12:52:11 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.780 12:52:11 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:39.780 12:52:11 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:39.780 12:52:11 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:39.780 12:52:11 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:39.780 12:52:11 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:39.780 12:52:11 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:39.780 12:52:11 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:39.780 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:39.780 12:52:11 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:39.780 12:52:11 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:39.780 12:52:11 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:39.780 12:52:11 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:39.780 12:52:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:39.780 12:52:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:39.780 12:52:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:39.780 12:52:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:39.780 12:52:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:39.780 12:52:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:39.780 12:52:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:39.780 12:52:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:39.780 12:52:11 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:39.780 INFO: launching applications... 00:05:39.780 12:52:11 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
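Note: "launching applications..." here means starting spdk_tgt from a pre-written JSON config (extra_key.json) rather than from one saved off a live target; the app_pid/app_socket/app_params/configs_path associative arrays declared just above feed json_config_test_start_app. A rough, condensed equivalent of what that helper does for the 'target' app in this run:

rootdir=/home/vagrant/spdk_repo/spdk
app=target
# start the target with the canned config and a dedicated RPC socket, in the background
"$rootdir/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json "$rootdir/test/json_config/extra_key.json" &
app_pid[$app]=$!    # app_pid is the declare -A array set up above
echo 'Waiting for target to run...'
# waitforlisten (common/autotest_common.sh) polls until the RPC socket answers, or times out
waitforlisten "${app_pid[$app]}" /var/tmp/spdk_tgt.sock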
00:05:39.780 12:52:11 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:39.780 12:52:11 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:39.780 12:52:11 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:39.780 12:52:11 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:39.780 12:52:11 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:39.780 12:52:11 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:39.780 12:52:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:39.780 12:52:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:39.780 12:52:11 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57641 00:05:39.780 12:52:11 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:39.780 12:52:11 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:39.780 Waiting for target to run... 00:05:39.780 12:52:11 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57641 /var/tmp/spdk_tgt.sock 00:05:39.780 12:52:11 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57641 ']' 00:05:39.780 12:52:11 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:39.780 12:52:11 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:39.781 12:52:11 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:39.781 12:52:11 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.781 12:52:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:39.781 [2024-11-29 12:52:11.193057] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:05:39.781 [2024-11-29 12:52:11.193145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57641 ] 00:05:40.349 [2024-11-29 12:52:11.619096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.349 [2024-11-29 12:52:11.667769] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.349 [2024-11-29 12:52:11.702210] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:40.917 12:52:12 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.917 12:52:12 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:40.917 00:05:40.917 12:52:12 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:40.917 INFO: shutting down applications... 00:05:40.917 12:52:12 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
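Note: the shutdown that follows ("shutting down applications...") is the same json_config/common.sh pattern seen at the top of this section for pid 57291: send SIGINT to the target, then poll with kill -0 for up to thirty half-second intervals before declaring the shutdown done. Condensed sketch of that loop:

# json_config_test_shutdown_app, roughly: signal, then wait for the pid to disappear
kill -SIGINT "${app_pid[$app]}"
for (( i = 0; i < 30; i++ )); do
    if ! kill -0 "${app_pid[$app]}" 2>/dev/null; then
        app_pid[$app]=        # forget the pid once the process is gone
        break
    fi
    sleep 0.5
done
echo 'SPDK target shutdown done'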
00:05:40.917 12:52:12 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:40.917 12:52:12 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:40.917 12:52:12 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:40.917 12:52:12 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57641 ]] 00:05:40.917 12:52:12 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57641 00:05:40.917 12:52:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:40.917 12:52:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:40.917 12:52:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57641 00:05:40.917 12:52:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:41.485 12:52:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:41.485 12:52:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:41.485 12:52:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57641 00:05:41.485 12:52:12 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:41.485 12:52:12 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:41.485 12:52:12 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:41.485 SPDK target shutdown done 00:05:41.485 12:52:12 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:41.485 Success 00:05:41.485 12:52:12 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:41.485 00:05:41.485 real 0m1.770s 00:05:41.485 user 0m1.709s 00:05:41.485 sys 0m0.466s 00:05:41.485 12:52:12 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.485 ************************************ 00:05:41.485 END TEST json_config_extra_key 00:05:41.485 ************************************ 00:05:41.485 12:52:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:41.485 12:52:12 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:41.485 12:52:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.485 12:52:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.485 12:52:12 -- common/autotest_common.sh@10 -- # set +x 00:05:41.485 ************************************ 00:05:41.485 START TEST alias_rpc 00:05:41.485 ************************************ 00:05:41.485 12:52:12 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:41.485 * Looking for test storage... 
00:05:41.485 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:41.485 12:52:12 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.485 12:52:12 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:41.485 12:52:12 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.485 12:52:12 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:41.485 12:52:12 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.485 12:52:12 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.485 12:52:12 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.485 12:52:12 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.485 12:52:12 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.485 12:52:12 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.485 12:52:12 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.485 12:52:12 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.485 12:52:12 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.485 12:52:12 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.485 12:52:12 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.485 12:52:12 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:41.485 12:52:12 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:41.485 12:52:12 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.485 12:52:12 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.485 12:52:12 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:41.485 12:52:12 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:41.485 12:52:12 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.485 12:52:12 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:41.485 12:52:12 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.485 12:52:12 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:41.485 12:52:12 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:41.485 12:52:12 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.485 12:52:12 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:41.485 12:52:12 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.485 12:52:12 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.485 12:52:12 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.485 12:52:12 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:41.485 12:52:12 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.485 12:52:12 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:41.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.485 --rc genhtml_branch_coverage=1 00:05:41.485 --rc genhtml_function_coverage=1 00:05:41.485 --rc genhtml_legend=1 00:05:41.485 --rc geninfo_all_blocks=1 00:05:41.485 --rc geninfo_unexecuted_blocks=1 00:05:41.485 00:05:41.485 ' 00:05:41.485 12:52:12 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:41.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.485 --rc genhtml_branch_coverage=1 00:05:41.485 --rc genhtml_function_coverage=1 00:05:41.485 --rc genhtml_legend=1 00:05:41.485 --rc geninfo_all_blocks=1 00:05:41.485 --rc geninfo_unexecuted_blocks=1 00:05:41.485 00:05:41.485 ' 00:05:41.485 12:52:12 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:41.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.485 --rc genhtml_branch_coverage=1 00:05:41.485 --rc genhtml_function_coverage=1 00:05:41.485 --rc genhtml_legend=1 00:05:41.485 --rc geninfo_all_blocks=1 00:05:41.485 --rc geninfo_unexecuted_blocks=1 00:05:41.485 00:05:41.485 ' 00:05:41.485 12:52:12 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:41.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.485 --rc genhtml_branch_coverage=1 00:05:41.486 --rc genhtml_function_coverage=1 00:05:41.486 --rc genhtml_legend=1 00:05:41.486 --rc geninfo_all_blocks=1 00:05:41.486 --rc geninfo_unexecuted_blocks=1 00:05:41.486 00:05:41.486 ' 00:05:41.486 12:52:12 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:41.486 12:52:12 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57719 00:05:41.486 12:52:12 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:41.486 12:52:12 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57719 00:05:41.486 12:52:12 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57719 ']' 00:05:41.486 12:52:12 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.486 12:52:12 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.486 12:52:12 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.486 12:52:12 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.486 12:52:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.745 [2024-11-29 12:52:13.053369] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:05:41.745 [2024-11-29 12:52:13.053531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57719 ] 00:05:41.745 [2024-11-29 12:52:13.201334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.004 [2024-11-29 12:52:13.258053] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.004 [2024-11-29 12:52:13.328403] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:42.263 12:52:13 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.263 12:52:13 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:42.263 12:52:13 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:42.522 12:52:13 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57719 00:05:42.522 12:52:13 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57719 ']' 00:05:42.522 12:52:13 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57719 00:05:42.522 12:52:13 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:42.522 12:52:13 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.522 12:52:13 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57719 00:05:42.522 12:52:13 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.522 12:52:13 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.522 12:52:13 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57719' 00:05:42.522 killing process with pid 57719 00:05:42.522 12:52:13 alias_rpc -- common/autotest_common.sh@973 -- # kill 57719 00:05:42.522 12:52:13 alias_rpc -- common/autotest_common.sh@978 -- # wait 57719 00:05:42.813 00:05:42.813 real 0m1.482s 00:05:42.813 user 0m1.511s 00:05:42.813 sys 0m0.494s 00:05:42.813 12:52:14 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.813 ************************************ 00:05:42.813 END TEST alias_rpc 00:05:42.813 ************************************ 00:05:42.813 12:52:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.813 12:52:14 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:42.813 12:52:14 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:42.813 12:52:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.813 12:52:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.813 12:52:14 -- common/autotest_common.sh@10 -- # set +x 00:05:42.813 ************************************ 00:05:42.813 START TEST spdkcli_tcp 00:05:42.813 ************************************ 00:05:42.813 12:52:14 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:43.072 * Looking for test storage... 
00:05:43.072 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:43.072 12:52:14 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:43.072 12:52:14 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:43.072 12:52:14 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:43.072 12:52:14 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:43.072 12:52:14 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.072 12:52:14 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.072 12:52:14 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.072 12:52:14 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.072 12:52:14 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.072 12:52:14 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.072 12:52:14 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.072 12:52:14 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.072 12:52:14 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.072 12:52:14 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.072 12:52:14 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.072 12:52:14 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:43.072 12:52:14 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:43.072 12:52:14 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.072 12:52:14 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:43.072 12:52:14 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:43.072 12:52:14 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:43.072 12:52:14 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.072 12:52:14 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:43.072 12:52:14 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.072 12:52:14 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:43.072 12:52:14 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:43.072 12:52:14 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.072 12:52:14 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:43.072 12:52:14 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.072 12:52:14 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.072 12:52:14 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.072 12:52:14 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:43.072 12:52:14 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.072 12:52:14 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:43.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.072 --rc genhtml_branch_coverage=1 00:05:43.072 --rc genhtml_function_coverage=1 00:05:43.072 --rc genhtml_legend=1 00:05:43.072 --rc geninfo_all_blocks=1 00:05:43.072 --rc geninfo_unexecuted_blocks=1 00:05:43.072 00:05:43.072 ' 00:05:43.072 12:52:14 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:43.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.072 --rc genhtml_branch_coverage=1 00:05:43.072 --rc genhtml_function_coverage=1 00:05:43.072 --rc genhtml_legend=1 00:05:43.072 --rc geninfo_all_blocks=1 00:05:43.072 --rc geninfo_unexecuted_blocks=1 00:05:43.072 
00:05:43.072 ' 00:05:43.072 12:52:14 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:43.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.072 --rc genhtml_branch_coverage=1 00:05:43.072 --rc genhtml_function_coverage=1 00:05:43.072 --rc genhtml_legend=1 00:05:43.072 --rc geninfo_all_blocks=1 00:05:43.072 --rc geninfo_unexecuted_blocks=1 00:05:43.072 00:05:43.072 ' 00:05:43.072 12:52:14 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:43.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.072 --rc genhtml_branch_coverage=1 00:05:43.072 --rc genhtml_function_coverage=1 00:05:43.072 --rc genhtml_legend=1 00:05:43.072 --rc geninfo_all_blocks=1 00:05:43.072 --rc geninfo_unexecuted_blocks=1 00:05:43.072 00:05:43.072 ' 00:05:43.072 12:52:14 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:43.072 12:52:14 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:43.072 12:52:14 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:43.072 12:52:14 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:43.072 12:52:14 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:43.072 12:52:14 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:43.072 12:52:14 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:43.072 12:52:14 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:43.072 12:52:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:43.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.072 12:52:14 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57790 00:05:43.072 12:52:14 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57790 00:05:43.072 12:52:14 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57790 ']' 00:05:43.072 12:52:14 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:43.072 12:52:14 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.072 12:52:14 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.072 12:52:14 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.072 12:52:14 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.072 12:52:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:43.072 [2024-11-29 12:52:14.561865] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
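Note: in the spdkcli_tcp run now starting, spdk_tgt is launched on two cores (-m 0x3 -p 0) and its UNIX RPC socket is then exposed over TCP with socat, so rpc.py can be exercised in TCP mode; the long rpc_get_methods listing further below is the result of that query. Condensed from the commands in this run (the -r/-t retry and timeout values are taken verbatim from the invocation above):

rootdir=/home/vagrant/spdk_repo/spdk
# forward TCP 127.0.0.1:9998 to the target's UNIX-domain RPC socket
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!
# talk JSON-RPC to the target over TCP instead of the UNIX socket
"$rootdir/scripts/rpc.py" -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
kill "$socat_pid"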
00:05:43.072 [2024-11-29 12:52:14.561948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57790 ] 00:05:43.333 [2024-11-29 12:52:14.702449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:43.333 [2024-11-29 12:52:14.760925] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.333 [2024-11-29 12:52:14.760941] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.333 [2024-11-29 12:52:14.828201] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:43.592 12:52:15 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.592 12:52:15 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:43.592 12:52:15 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57805 00:05:43.592 12:52:15 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:43.592 12:52:15 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:43.851 [ 00:05:43.851 "bdev_malloc_delete", 00:05:43.851 "bdev_malloc_create", 00:05:43.851 "bdev_null_resize", 00:05:43.851 "bdev_null_delete", 00:05:43.851 "bdev_null_create", 00:05:43.851 "bdev_nvme_cuse_unregister", 00:05:43.851 "bdev_nvme_cuse_register", 00:05:43.851 "bdev_opal_new_user", 00:05:43.851 "bdev_opal_set_lock_state", 00:05:43.851 "bdev_opal_delete", 00:05:43.851 "bdev_opal_get_info", 00:05:43.851 "bdev_opal_create", 00:05:43.851 "bdev_nvme_opal_revert", 00:05:43.851 "bdev_nvme_opal_init", 00:05:43.851 "bdev_nvme_send_cmd", 00:05:43.851 "bdev_nvme_set_keys", 00:05:43.852 "bdev_nvme_get_path_iostat", 00:05:43.852 "bdev_nvme_get_mdns_discovery_info", 00:05:43.852 "bdev_nvme_stop_mdns_discovery", 00:05:43.852 "bdev_nvme_start_mdns_discovery", 00:05:43.852 "bdev_nvme_set_multipath_policy", 00:05:43.852 "bdev_nvme_set_preferred_path", 00:05:43.852 "bdev_nvme_get_io_paths", 00:05:43.852 "bdev_nvme_remove_error_injection", 00:05:43.852 "bdev_nvme_add_error_injection", 00:05:43.852 "bdev_nvme_get_discovery_info", 00:05:43.852 "bdev_nvme_stop_discovery", 00:05:43.852 "bdev_nvme_start_discovery", 00:05:43.852 "bdev_nvme_get_controller_health_info", 00:05:43.852 "bdev_nvme_disable_controller", 00:05:43.852 "bdev_nvme_enable_controller", 00:05:43.852 "bdev_nvme_reset_controller", 00:05:43.852 "bdev_nvme_get_transport_statistics", 00:05:43.852 "bdev_nvme_apply_firmware", 00:05:43.852 "bdev_nvme_detach_controller", 00:05:43.852 "bdev_nvme_get_controllers", 00:05:43.852 "bdev_nvme_attach_controller", 00:05:43.852 "bdev_nvme_set_hotplug", 00:05:43.852 "bdev_nvme_set_options", 00:05:43.852 "bdev_passthru_delete", 00:05:43.852 "bdev_passthru_create", 00:05:43.852 "bdev_lvol_set_parent_bdev", 00:05:43.852 "bdev_lvol_set_parent", 00:05:43.852 "bdev_lvol_check_shallow_copy", 00:05:43.852 "bdev_lvol_start_shallow_copy", 00:05:43.852 "bdev_lvol_grow_lvstore", 00:05:43.852 "bdev_lvol_get_lvols", 00:05:43.852 "bdev_lvol_get_lvstores", 00:05:43.852 "bdev_lvol_delete", 00:05:43.852 "bdev_lvol_set_read_only", 00:05:43.852 "bdev_lvol_resize", 00:05:43.852 "bdev_lvol_decouple_parent", 00:05:43.852 "bdev_lvol_inflate", 00:05:43.852 "bdev_lvol_rename", 00:05:43.852 "bdev_lvol_clone_bdev", 00:05:43.852 "bdev_lvol_clone", 00:05:43.852 "bdev_lvol_snapshot", 
00:05:43.852 "bdev_lvol_create", 00:05:43.852 "bdev_lvol_delete_lvstore", 00:05:43.852 "bdev_lvol_rename_lvstore", 00:05:43.852 "bdev_lvol_create_lvstore", 00:05:43.852 "bdev_raid_set_options", 00:05:43.852 "bdev_raid_remove_base_bdev", 00:05:43.852 "bdev_raid_add_base_bdev", 00:05:43.852 "bdev_raid_delete", 00:05:43.852 "bdev_raid_create", 00:05:43.852 "bdev_raid_get_bdevs", 00:05:43.852 "bdev_error_inject_error", 00:05:43.852 "bdev_error_delete", 00:05:43.852 "bdev_error_create", 00:05:43.852 "bdev_split_delete", 00:05:43.852 "bdev_split_create", 00:05:43.852 "bdev_delay_delete", 00:05:43.852 "bdev_delay_create", 00:05:43.852 "bdev_delay_update_latency", 00:05:43.852 "bdev_zone_block_delete", 00:05:43.852 "bdev_zone_block_create", 00:05:43.852 "blobfs_create", 00:05:43.852 "blobfs_detect", 00:05:43.852 "blobfs_set_cache_size", 00:05:43.852 "bdev_aio_delete", 00:05:43.852 "bdev_aio_rescan", 00:05:43.852 "bdev_aio_create", 00:05:43.852 "bdev_ftl_set_property", 00:05:43.852 "bdev_ftl_get_properties", 00:05:43.852 "bdev_ftl_get_stats", 00:05:43.852 "bdev_ftl_unmap", 00:05:43.852 "bdev_ftl_unload", 00:05:43.852 "bdev_ftl_delete", 00:05:43.852 "bdev_ftl_load", 00:05:43.852 "bdev_ftl_create", 00:05:43.852 "bdev_virtio_attach_controller", 00:05:43.852 "bdev_virtio_scsi_get_devices", 00:05:43.852 "bdev_virtio_detach_controller", 00:05:43.852 "bdev_virtio_blk_set_hotplug", 00:05:43.852 "bdev_iscsi_delete", 00:05:43.852 "bdev_iscsi_create", 00:05:43.852 "bdev_iscsi_set_options", 00:05:43.852 "bdev_uring_delete", 00:05:43.852 "bdev_uring_rescan", 00:05:43.852 "bdev_uring_create", 00:05:43.852 "accel_error_inject_error", 00:05:43.852 "ioat_scan_accel_module", 00:05:43.852 "dsa_scan_accel_module", 00:05:43.852 "iaa_scan_accel_module", 00:05:43.852 "keyring_file_remove_key", 00:05:43.852 "keyring_file_add_key", 00:05:43.852 "keyring_linux_set_options", 00:05:43.852 "fsdev_aio_delete", 00:05:43.852 "fsdev_aio_create", 00:05:43.852 "iscsi_get_histogram", 00:05:43.852 "iscsi_enable_histogram", 00:05:43.852 "iscsi_set_options", 00:05:43.852 "iscsi_get_auth_groups", 00:05:43.852 "iscsi_auth_group_remove_secret", 00:05:43.852 "iscsi_auth_group_add_secret", 00:05:43.852 "iscsi_delete_auth_group", 00:05:43.852 "iscsi_create_auth_group", 00:05:43.852 "iscsi_set_discovery_auth", 00:05:43.852 "iscsi_get_options", 00:05:43.852 "iscsi_target_node_request_logout", 00:05:43.852 "iscsi_target_node_set_redirect", 00:05:43.852 "iscsi_target_node_set_auth", 00:05:43.852 "iscsi_target_node_add_lun", 00:05:43.852 "iscsi_get_stats", 00:05:43.852 "iscsi_get_connections", 00:05:43.852 "iscsi_portal_group_set_auth", 00:05:43.852 "iscsi_start_portal_group", 00:05:43.852 "iscsi_delete_portal_group", 00:05:43.852 "iscsi_create_portal_group", 00:05:43.852 "iscsi_get_portal_groups", 00:05:43.852 "iscsi_delete_target_node", 00:05:43.852 "iscsi_target_node_remove_pg_ig_maps", 00:05:43.852 "iscsi_target_node_add_pg_ig_maps", 00:05:43.852 "iscsi_create_target_node", 00:05:43.852 "iscsi_get_target_nodes", 00:05:43.852 "iscsi_delete_initiator_group", 00:05:43.852 "iscsi_initiator_group_remove_initiators", 00:05:43.852 "iscsi_initiator_group_add_initiators", 00:05:43.852 "iscsi_create_initiator_group", 00:05:43.852 "iscsi_get_initiator_groups", 00:05:43.852 "nvmf_set_crdt", 00:05:43.852 "nvmf_set_config", 00:05:43.852 "nvmf_set_max_subsystems", 00:05:43.852 "nvmf_stop_mdns_prr", 00:05:43.852 "nvmf_publish_mdns_prr", 00:05:43.852 "nvmf_subsystem_get_listeners", 00:05:43.852 "nvmf_subsystem_get_qpairs", 00:05:43.852 
"nvmf_subsystem_get_controllers", 00:05:43.852 "nvmf_get_stats", 00:05:43.852 "nvmf_get_transports", 00:05:43.852 "nvmf_create_transport", 00:05:43.852 "nvmf_get_targets", 00:05:43.852 "nvmf_delete_target", 00:05:43.852 "nvmf_create_target", 00:05:43.852 "nvmf_subsystem_allow_any_host", 00:05:43.852 "nvmf_subsystem_set_keys", 00:05:43.852 "nvmf_subsystem_remove_host", 00:05:43.852 "nvmf_subsystem_add_host", 00:05:43.852 "nvmf_ns_remove_host", 00:05:43.852 "nvmf_ns_add_host", 00:05:43.852 "nvmf_subsystem_remove_ns", 00:05:43.852 "nvmf_subsystem_set_ns_ana_group", 00:05:43.852 "nvmf_subsystem_add_ns", 00:05:43.852 "nvmf_subsystem_listener_set_ana_state", 00:05:43.852 "nvmf_discovery_get_referrals", 00:05:43.852 "nvmf_discovery_remove_referral", 00:05:43.852 "nvmf_discovery_add_referral", 00:05:43.852 "nvmf_subsystem_remove_listener", 00:05:43.852 "nvmf_subsystem_add_listener", 00:05:43.852 "nvmf_delete_subsystem", 00:05:43.852 "nvmf_create_subsystem", 00:05:43.852 "nvmf_get_subsystems", 00:05:43.852 "env_dpdk_get_mem_stats", 00:05:43.852 "nbd_get_disks", 00:05:43.852 "nbd_stop_disk", 00:05:43.852 "nbd_start_disk", 00:05:43.852 "ublk_recover_disk", 00:05:43.852 "ublk_get_disks", 00:05:43.852 "ublk_stop_disk", 00:05:43.852 "ublk_start_disk", 00:05:43.852 "ublk_destroy_target", 00:05:43.852 "ublk_create_target", 00:05:43.852 "virtio_blk_create_transport", 00:05:43.852 "virtio_blk_get_transports", 00:05:43.852 "vhost_controller_set_coalescing", 00:05:43.852 "vhost_get_controllers", 00:05:43.852 "vhost_delete_controller", 00:05:43.852 "vhost_create_blk_controller", 00:05:43.852 "vhost_scsi_controller_remove_target", 00:05:43.852 "vhost_scsi_controller_add_target", 00:05:43.852 "vhost_start_scsi_controller", 00:05:43.852 "vhost_create_scsi_controller", 00:05:43.852 "thread_set_cpumask", 00:05:43.852 "scheduler_set_options", 00:05:43.852 "framework_get_governor", 00:05:43.852 "framework_get_scheduler", 00:05:43.852 "framework_set_scheduler", 00:05:43.852 "framework_get_reactors", 00:05:43.852 "thread_get_io_channels", 00:05:43.852 "thread_get_pollers", 00:05:43.852 "thread_get_stats", 00:05:43.852 "framework_monitor_context_switch", 00:05:43.852 "spdk_kill_instance", 00:05:43.852 "log_enable_timestamps", 00:05:43.852 "log_get_flags", 00:05:43.852 "log_clear_flag", 00:05:43.852 "log_set_flag", 00:05:43.852 "log_get_level", 00:05:43.852 "log_set_level", 00:05:43.852 "log_get_print_level", 00:05:43.852 "log_set_print_level", 00:05:43.852 "framework_enable_cpumask_locks", 00:05:43.852 "framework_disable_cpumask_locks", 00:05:43.852 "framework_wait_init", 00:05:43.852 "framework_start_init", 00:05:43.852 "scsi_get_devices", 00:05:43.852 "bdev_get_histogram", 00:05:43.852 "bdev_enable_histogram", 00:05:43.852 "bdev_set_qos_limit", 00:05:43.852 "bdev_set_qd_sampling_period", 00:05:43.852 "bdev_get_bdevs", 00:05:43.852 "bdev_reset_iostat", 00:05:43.852 "bdev_get_iostat", 00:05:43.852 "bdev_examine", 00:05:43.852 "bdev_wait_for_examine", 00:05:43.852 "bdev_set_options", 00:05:43.852 "accel_get_stats", 00:05:43.852 "accel_set_options", 00:05:43.852 "accel_set_driver", 00:05:43.852 "accel_crypto_key_destroy", 00:05:43.852 "accel_crypto_keys_get", 00:05:43.852 "accel_crypto_key_create", 00:05:43.852 "accel_assign_opc", 00:05:43.852 "accel_get_module_info", 00:05:43.852 "accel_get_opc_assignments", 00:05:43.852 "vmd_rescan", 00:05:43.852 "vmd_remove_device", 00:05:43.852 "vmd_enable", 00:05:43.852 "sock_get_default_impl", 00:05:43.852 "sock_set_default_impl", 00:05:43.852 "sock_impl_set_options", 00:05:43.852 
"sock_impl_get_options", 00:05:43.852 "iobuf_get_stats", 00:05:43.852 "iobuf_set_options", 00:05:43.852 "keyring_get_keys", 00:05:43.852 "framework_get_pci_devices", 00:05:43.852 "framework_get_config", 00:05:43.852 "framework_get_subsystems", 00:05:43.853 "fsdev_set_opts", 00:05:43.853 "fsdev_get_opts", 00:05:43.853 "trace_get_info", 00:05:43.853 "trace_get_tpoint_group_mask", 00:05:43.853 "trace_disable_tpoint_group", 00:05:43.853 "trace_enable_tpoint_group", 00:05:43.853 "trace_clear_tpoint_mask", 00:05:43.853 "trace_set_tpoint_mask", 00:05:43.853 "notify_get_notifications", 00:05:43.853 "notify_get_types", 00:05:43.853 "spdk_get_version", 00:05:43.853 "rpc_get_methods" 00:05:43.853 ] 00:05:43.853 12:52:15 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:43.853 12:52:15 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:43.853 12:52:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.111 12:52:15 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:44.111 12:52:15 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57790 00:05:44.111 12:52:15 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57790 ']' 00:05:44.111 12:52:15 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57790 00:05:44.111 12:52:15 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:44.111 12:52:15 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.111 12:52:15 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57790 00:05:44.111 killing process with pid 57790 00:05:44.111 12:52:15 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:44.111 12:52:15 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:44.111 12:52:15 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57790' 00:05:44.111 12:52:15 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57790 00:05:44.111 12:52:15 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57790 00:05:44.370 00:05:44.370 real 0m1.515s 00:05:44.370 user 0m2.634s 00:05:44.370 sys 0m0.490s 00:05:44.370 12:52:15 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.370 ************************************ 00:05:44.370 END TEST spdkcli_tcp 00:05:44.370 12:52:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.370 ************************************ 00:05:44.370 12:52:15 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:44.370 12:52:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.370 12:52:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.370 12:52:15 -- common/autotest_common.sh@10 -- # set +x 00:05:44.629 ************************************ 00:05:44.629 START TEST dpdk_mem_utility 00:05:44.629 ************************************ 00:05:44.629 12:52:15 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:44.629 * Looking for test storage... 
00:05:44.629 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:44.629 12:52:15 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:44.629 12:52:15 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:44.629 12:52:15 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:44.629 12:52:16 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:44.629 12:52:16 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.629 12:52:16 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.629 12:52:16 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.629 12:52:16 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.629 12:52:16 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.629 12:52:16 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.629 12:52:16 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.629 12:52:16 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.629 12:52:16 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.629 12:52:16 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.629 12:52:16 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.629 12:52:16 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:44.629 12:52:16 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:44.629 12:52:16 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.629 12:52:16 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.629 12:52:16 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:44.629 12:52:16 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:44.629 12:52:16 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.629 12:52:16 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:44.629 12:52:16 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.629 12:52:16 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:44.629 12:52:16 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:44.629 12:52:16 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.629 12:52:16 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:44.629 12:52:16 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.629 12:52:16 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.629 12:52:16 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.629 12:52:16 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:44.630 12:52:16 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.630 12:52:16 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:44.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.630 --rc genhtml_branch_coverage=1 00:05:44.630 --rc genhtml_function_coverage=1 00:05:44.630 --rc genhtml_legend=1 00:05:44.630 --rc geninfo_all_blocks=1 00:05:44.630 --rc geninfo_unexecuted_blocks=1 00:05:44.630 00:05:44.630 ' 00:05:44.630 12:52:16 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:44.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.630 --rc 
genhtml_branch_coverage=1 00:05:44.630 --rc genhtml_function_coverage=1 00:05:44.630 --rc genhtml_legend=1 00:05:44.630 --rc geninfo_all_blocks=1 00:05:44.630 --rc geninfo_unexecuted_blocks=1 00:05:44.630 00:05:44.630 ' 00:05:44.630 12:52:16 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:44.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.630 --rc genhtml_branch_coverage=1 00:05:44.630 --rc genhtml_function_coverage=1 00:05:44.630 --rc genhtml_legend=1 00:05:44.630 --rc geninfo_all_blocks=1 00:05:44.630 --rc geninfo_unexecuted_blocks=1 00:05:44.630 00:05:44.630 ' 00:05:44.630 12:52:16 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:44.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.630 --rc genhtml_branch_coverage=1 00:05:44.630 --rc genhtml_function_coverage=1 00:05:44.630 --rc genhtml_legend=1 00:05:44.630 --rc geninfo_all_blocks=1 00:05:44.630 --rc geninfo_unexecuted_blocks=1 00:05:44.630 00:05:44.630 ' 00:05:44.630 12:52:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:44.630 12:52:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57887 00:05:44.630 12:52:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.630 12:52:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57887 00:05:44.630 12:52:16 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57887 ']' 00:05:44.630 12:52:16 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.630 12:52:16 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.630 12:52:16 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.630 12:52:16 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.630 12:52:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:44.889 [2024-11-29 12:52:16.147062] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
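Note: the dpdk_mem_utility test starting here drives the two steps whose output follows: it asks the freshly started target to dump its DPDK memory state, then post-processes that dump with dpdk_mem_info.py, first as a summary (heaps, mempools, memzones) and then, via -m 0, as the detailed element list for heap 0 seen below. Condensed sketch; the dump path comes from the RPC reply, and dpdk_mem_info.py is assumed to read that default path:

rootdir=/home/vagrant/spdk_repo/spdk
# ask the running target to write its DPDK memory statistics to a file
"$rootdir/scripts/rpc.py" env_dpdk_get_mem_stats    # replies: { "filename": "/tmp/spdk_mem_dump.txt" }
# summarize the dump, then print the per-element breakdown for heap 0
"$rootdir/scripts/dpdk_mem_info.py"
"$rootdir/scripts/dpdk_mem_info.py" -m 0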
00:05:44.889 [2024-11-29 12:52:16.147186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57887 ] 00:05:44.889 [2024-11-29 12:52:16.291865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.889 [2024-11-29 12:52:16.349067] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.149 [2024-11-29 12:52:16.424309] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:45.718 12:52:17 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.718 12:52:17 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:45.718 12:52:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:45.718 12:52:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:45.718 12:52:17 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.718 12:52:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:45.718 { 00:05:45.718 "filename": "/tmp/spdk_mem_dump.txt" 00:05:45.718 } 00:05:45.718 12:52:17 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.718 12:52:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:45.718 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:45.718 1 heaps totaling size 818.000000 MiB 00:05:45.718 size: 818.000000 MiB heap id: 0 00:05:45.718 end heaps---------- 00:05:45.718 9 mempools totaling size 603.782043 MiB 00:05:45.718 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:45.718 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:45.718 size: 100.555481 MiB name: bdev_io_57887 00:05:45.718 size: 50.003479 MiB name: msgpool_57887 00:05:45.718 size: 36.509338 MiB name: fsdev_io_57887 00:05:45.718 size: 21.763794 MiB name: PDU_Pool 00:05:45.718 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:45.718 size: 4.133484 MiB name: evtpool_57887 00:05:45.718 size: 0.026123 MiB name: Session_Pool 00:05:45.718 end mempools------- 00:05:45.718 6 memzones totaling size 4.142822 MiB 00:05:45.718 size: 1.000366 MiB name: RG_ring_0_57887 00:05:45.718 size: 1.000366 MiB name: RG_ring_1_57887 00:05:45.718 size: 1.000366 MiB name: RG_ring_4_57887 00:05:45.718 size: 1.000366 MiB name: RG_ring_5_57887 00:05:45.718 size: 0.125366 MiB name: RG_ring_2_57887 00:05:45.718 size: 0.015991 MiB name: RG_ring_3_57887 00:05:45.718 end memzones------- 00:05:45.718 12:52:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:45.980 heap id: 0 total size: 818.000000 MiB number of busy elements: 315 number of free elements: 15 00:05:45.980 list of free elements. 
size: 10.802856 MiB 00:05:45.980 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:45.980 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:45.980 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:45.980 element at address: 0x200000400000 with size: 0.993958 MiB 00:05:45.980 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:45.980 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:45.980 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:45.980 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:45.980 element at address: 0x20001ae00000 with size: 0.568054 MiB 00:05:45.980 element at address: 0x20000a600000 with size: 0.488892 MiB 00:05:45.980 element at address: 0x200000c00000 with size: 0.486267 MiB 00:05:45.980 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:45.980 element at address: 0x200003e00000 with size: 0.480286 MiB 00:05:45.980 element at address: 0x200028200000 with size: 0.395752 MiB 00:05:45.980 element at address: 0x200000800000 with size: 0.351746 MiB 00:05:45.980 list of standard malloc elements. size: 199.268250 MiB 00:05:45.980 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:45.980 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:45.980 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:45.980 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:45.980 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:45.980 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:45.980 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:45.980 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:45.980 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:45.980 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:45.980 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:45.980 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:05:45.980 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:05:45.980 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:05:45.980 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:05:45.980 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:05:45.980 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:05:45.980 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:05:45.980 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:05:45.980 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:05:45.980 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:05:45.980 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:05:45.980 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:05:45.980 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:05:45.980 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:45.980 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:45.980 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:45.980 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:05:45.980 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:05:45.980 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:45.980 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:45.980 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:45.980 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:05:45.980 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:05:45.980 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:45.980 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:45.980 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:45.980 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:45.980 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:05:45.980 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:45.980 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:45.980 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:05:45.980 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:05:45.980 element at address: 0x20000085e580 with size: 0.000183 MiB 00:05:45.980 element at address: 0x20000087e840 with size: 0.000183 MiB 00:05:45.980 element at address: 0x20000087e900 with size: 0.000183 MiB 00:05:45.980 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:05:45.980 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:05:45.980 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:05:45.980 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:05:45.980 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:05:45.980 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:05:45.980 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:05:45.980 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:05:45.980 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:05:45.980 element at address: 0x20000087f080 with size: 0.000183 MiB 00:05:45.980 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:45.980 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:45.980 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:45.980 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:45.980 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:45.980 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:45.980 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:45.980 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:45.980 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:45.980 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:45.980 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:05:45.980 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:05:45.980 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:05:45.981 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:45.981 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:05:45.981 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:45.981 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:45.981 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:45.981 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae92e00 with size: 0.000183 MiB 
00:05:45.981 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:05:45.981 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:05:45.982 element at 
address: 0x20001ae95380 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:45.982 element at address: 0x200028265500 with size: 0.000183 MiB 00:05:45.982 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826c480 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826c540 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826c600 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826c780 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826c840 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826c900 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826d080 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826d140 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826d200 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826d380 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826d440 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826d500 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826d680 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826d740 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826d800 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826d980 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826da40 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826db00 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826de00 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826df80 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826e040 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826e100 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826e280 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826e340 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826e400 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826e4c0 
with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826e580 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826e640 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826e700 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826e880 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826e940 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826f000 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826f180 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826f240 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826f300 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826f480 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826f540 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826f600 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826f780 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826f840 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826f900 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:45.982 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:45.982 list of memzone associated elements. 
size: 607.928894 MiB 00:05:45.982 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:45.982 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:45.982 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:45.982 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:45.982 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:45.982 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57887_0 00:05:45.982 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:45.982 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57887_0 00:05:45.982 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:45.982 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57887_0 00:05:45.982 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:45.982 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:45.982 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:45.982 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:45.982 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:45.982 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57887_0 00:05:45.982 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:45.982 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57887 00:05:45.982 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:45.982 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57887 00:05:45.982 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:45.982 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:45.982 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:45.982 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:45.982 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:45.982 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:45.982 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:45.982 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:45.982 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:45.982 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57887 00:05:45.982 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:45.982 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57887 00:05:45.982 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:45.982 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57887 00:05:45.982 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:45.982 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57887 00:05:45.982 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:45.982 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57887 00:05:45.982 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:45.982 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57887 00:05:45.982 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:45.982 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:45.982 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:45.982 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:45.982 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:45.982 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:45.982 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:45.982 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57887 00:05:45.982 element at address: 0x20000085e640 with size: 0.125488 MiB 00:05:45.982 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57887 00:05:45.982 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:45.982 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:45.982 element at address: 0x200028265680 with size: 0.023743 MiB 00:05:45.982 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:45.982 element at address: 0x20000085a380 with size: 0.016113 MiB 00:05:45.982 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57887 00:05:45.982 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:05:45.982 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:45.982 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:05:45.982 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57887 00:05:45.983 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:45.983 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57887 00:05:45.983 element at address: 0x20000085a180 with size: 0.000305 MiB 00:05:45.983 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57887 00:05:45.983 element at address: 0x20002826c280 with size: 0.000305 MiB 00:05:45.983 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:45.983 12:52:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:45.983 12:52:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57887 00:05:45.983 12:52:17 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57887 ']' 00:05:45.983 12:52:17 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57887 00:05:45.983 12:52:17 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:45.983 12:52:17 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.983 12:52:17 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57887 00:05:45.983 12:52:17 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.983 12:52:17 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.983 killing process with pid 57887 00:05:45.983 12:52:17 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57887' 00:05:45.983 12:52:17 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57887 00:05:45.983 12:52:17 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57887 00:05:46.242 00:05:46.242 real 0m1.806s 00:05:46.242 user 0m1.922s 00:05:46.242 sys 0m0.470s 00:05:46.242 12:52:17 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.242 ************************************ 00:05:46.242 END TEST dpdk_mem_utility 00:05:46.242 ************************************ 00:05:46.242 12:52:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:46.242 12:52:17 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:46.242 12:52:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.242 12:52:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.242 12:52:17 -- common/autotest_common.sh@10 -- # set +x 
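The two memory reports above come from the MEM_SCRIPT invocations traced earlier; reproduced by hand against a running spdk_tgt, the sequence is roughly as follows (paths as printed in the trace; /tmp/spdk_mem_dump.txt is the dump file the RPC reported):

# ask the target to write its DPDK memory state to a dump file
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
# summarize heaps, mempools and memzones from the dump
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
# print the per-element layout of heap 0, as in the 'heap id: 0' report above
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0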
00:05:46.242 ************************************ 00:05:46.242 START TEST event 00:05:46.242 ************************************ 00:05:46.242 12:52:17 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:46.501 * Looking for test storage... 00:05:46.502 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:46.502 12:52:17 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:46.502 12:52:17 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:46.502 12:52:17 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:46.502 12:52:17 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:46.502 12:52:17 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.502 12:52:17 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.502 12:52:17 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.502 12:52:17 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.502 12:52:17 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.502 12:52:17 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.502 12:52:17 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.502 12:52:17 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.502 12:52:17 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.502 12:52:17 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.502 12:52:17 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.502 12:52:17 event -- scripts/common.sh@344 -- # case "$op" in 00:05:46.502 12:52:17 event -- scripts/common.sh@345 -- # : 1 00:05:46.502 12:52:17 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.502 12:52:17 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.502 12:52:17 event -- scripts/common.sh@365 -- # decimal 1 00:05:46.502 12:52:17 event -- scripts/common.sh@353 -- # local d=1 00:05:46.502 12:52:17 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.502 12:52:17 event -- scripts/common.sh@355 -- # echo 1 00:05:46.502 12:52:17 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.502 12:52:17 event -- scripts/common.sh@366 -- # decimal 2 00:05:46.502 12:52:17 event -- scripts/common.sh@353 -- # local d=2 00:05:46.502 12:52:17 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.502 12:52:17 event -- scripts/common.sh@355 -- # echo 2 00:05:46.502 12:52:17 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.502 12:52:17 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.502 12:52:17 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.502 12:52:17 event -- scripts/common.sh@368 -- # return 0 00:05:46.502 12:52:17 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.502 12:52:17 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:46.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.502 --rc genhtml_branch_coverage=1 00:05:46.502 --rc genhtml_function_coverage=1 00:05:46.502 --rc genhtml_legend=1 00:05:46.502 --rc geninfo_all_blocks=1 00:05:46.502 --rc geninfo_unexecuted_blocks=1 00:05:46.502 00:05:46.502 ' 00:05:46.502 12:52:17 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:46.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.502 --rc genhtml_branch_coverage=1 00:05:46.502 --rc genhtml_function_coverage=1 00:05:46.502 --rc genhtml_legend=1 00:05:46.502 --rc 
geninfo_all_blocks=1 00:05:46.502 --rc geninfo_unexecuted_blocks=1 00:05:46.502 00:05:46.502 ' 00:05:46.502 12:52:17 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:46.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.502 --rc genhtml_branch_coverage=1 00:05:46.502 --rc genhtml_function_coverage=1 00:05:46.502 --rc genhtml_legend=1 00:05:46.502 --rc geninfo_all_blocks=1 00:05:46.502 --rc geninfo_unexecuted_blocks=1 00:05:46.502 00:05:46.502 ' 00:05:46.502 12:52:17 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:46.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.502 --rc genhtml_branch_coverage=1 00:05:46.502 --rc genhtml_function_coverage=1 00:05:46.502 --rc genhtml_legend=1 00:05:46.502 --rc geninfo_all_blocks=1 00:05:46.502 --rc geninfo_unexecuted_blocks=1 00:05:46.502 00:05:46.502 ' 00:05:46.502 12:52:17 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:46.502 12:52:17 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:46.502 12:52:17 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:46.502 12:52:17 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:46.502 12:52:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.502 12:52:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.502 ************************************ 00:05:46.502 START TEST event_perf 00:05:46.502 ************************************ 00:05:46.502 12:52:17 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:46.502 Running I/O for 1 seconds...[2024-11-29 12:52:18.009571] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:05:46.502 [2024-11-29 12:52:18.009667] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57966 ] 00:05:46.761 [2024-11-29 12:52:18.157179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:46.761 [2024-11-29 12:52:18.217733] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.761 [2024-11-29 12:52:18.217907] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.761 Running I/O for 1 seconds...[2024-11-29 12:52:18.219101] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:46.761 [2024-11-29 12:52:18.219123] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.140 00:05:48.140 lcore 0: 111698 00:05:48.140 lcore 1: 111699 00:05:48.140 lcore 2: 111700 00:05:48.140 lcore 3: 111697 00:05:48.140 done. 
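The per-lcore counters above are the output of the event_perf micro-benchmark started a few lines earlier; it can be invoked directly with the same arguments, where -m is the reactor core mask (0xF = cores 0-3 here) and -t is the run time in seconds:

/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1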
00:05:48.140 00:05:48.140 real 0m1.273s 00:05:48.140 user 0m4.088s 00:05:48.140 sys 0m0.057s 00:05:48.140 12:52:19 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.140 ************************************ 00:05:48.140 END TEST event_perf 00:05:48.140 ************************************ 00:05:48.140 12:52:19 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:48.140 12:52:19 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:48.140 12:52:19 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:48.140 12:52:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.140 12:52:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.140 ************************************ 00:05:48.140 START TEST event_reactor 00:05:48.140 ************************************ 00:05:48.140 12:52:19 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:48.140 [2024-11-29 12:52:19.332552] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:05:48.140 [2024-11-29 12:52:19.332673] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58005 ] 00:05:48.140 [2024-11-29 12:52:19.479372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.140 [2024-11-29 12:52:19.529909] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.078 test_start 00:05:49.078 oneshot 00:05:49.078 tick 100 00:05:49.078 tick 100 00:05:49.078 tick 250 00:05:49.078 tick 100 00:05:49.078 tick 100 00:05:49.078 tick 100 00:05:49.078 tick 250 00:05:49.078 tick 500 00:05:49.078 tick 100 00:05:49.078 tick 100 00:05:49.079 tick 250 00:05:49.079 tick 100 00:05:49.079 tick 100 00:05:49.079 test_end 00:05:49.079 00:05:49.079 real 0m1.256s 00:05:49.079 user 0m1.105s 00:05:49.079 sys 0m0.046s 00:05:49.079 12:52:20 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.079 ************************************ 00:05:49.079 END TEST event_reactor 00:05:49.079 ************************************ 00:05:49.079 12:52:20 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:49.339 12:52:20 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:49.339 12:52:20 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:49.339 12:52:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.339 12:52:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.339 ************************************ 00:05:49.339 START TEST event_reactor_perf 00:05:49.339 ************************************ 00:05:49.339 12:52:20 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:49.339 [2024-11-29 12:52:20.644602] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
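The reactor test above and the reactor_perf test starting here follow the same single-core pattern (reactor was launched with core mask 0x1); run outside the harness, the two commands traced by run_test are:

/home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
/home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1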
00:05:49.339 [2024-11-29 12:52:20.644706] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58035 ] 00:05:49.339 [2024-11-29 12:52:20.785948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.339 [2024-11-29 12:52:20.838561] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.716 test_start 00:05:50.716 test_end 00:05:50.716 Performance: 390468 events per second 00:05:50.716 00:05:50.716 real 0m1.255s 00:05:50.716 user 0m1.107s 00:05:50.716 sys 0m0.041s 00:05:50.716 12:52:21 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.716 12:52:21 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:50.716 ************************************ 00:05:50.716 END TEST event_reactor_perf 00:05:50.716 ************************************ 00:05:50.716 12:52:21 event -- event/event.sh@49 -- # uname -s 00:05:50.716 12:52:21 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:50.716 12:52:21 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:50.716 12:52:21 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.716 12:52:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.716 12:52:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.716 ************************************ 00:05:50.716 START TEST event_scheduler 00:05:50.716 ************************************ 00:05:50.716 12:52:21 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:50.716 * Looking for test storage... 
00:05:50.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:50.716 12:52:22 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:50.716 12:52:22 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:50.716 12:52:22 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:50.716 12:52:22 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:50.716 12:52:22 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.716 12:52:22 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.716 12:52:22 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.716 12:52:22 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.716 12:52:22 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.716 12:52:22 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.716 12:52:22 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.716 12:52:22 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.716 12:52:22 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.716 12:52:22 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.716 12:52:22 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.716 12:52:22 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:50.716 12:52:22 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:50.716 12:52:22 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.716 12:52:22 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.716 12:52:22 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:50.716 12:52:22 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:50.716 12:52:22 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.716 12:52:22 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:50.716 12:52:22 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.716 12:52:22 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:50.716 12:52:22 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:50.716 12:52:22 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.716 12:52:22 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:50.716 12:52:22 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.716 12:52:22 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.716 12:52:22 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.716 12:52:22 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:50.716 12:52:22 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.716 12:52:22 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:50.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.716 --rc genhtml_branch_coverage=1 00:05:50.716 --rc genhtml_function_coverage=1 00:05:50.716 --rc genhtml_legend=1 00:05:50.716 --rc geninfo_all_blocks=1 00:05:50.716 --rc geninfo_unexecuted_blocks=1 00:05:50.716 00:05:50.716 ' 00:05:50.716 12:52:22 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:50.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.716 --rc genhtml_branch_coverage=1 00:05:50.716 --rc genhtml_function_coverage=1 00:05:50.716 --rc genhtml_legend=1 00:05:50.716 --rc geninfo_all_blocks=1 00:05:50.716 --rc geninfo_unexecuted_blocks=1 00:05:50.716 00:05:50.716 ' 00:05:50.716 12:52:22 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:50.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.716 --rc genhtml_branch_coverage=1 00:05:50.716 --rc genhtml_function_coverage=1 00:05:50.716 --rc genhtml_legend=1 00:05:50.716 --rc geninfo_all_blocks=1 00:05:50.716 --rc geninfo_unexecuted_blocks=1 00:05:50.716 00:05:50.716 ' 00:05:50.716 12:52:22 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:50.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.716 --rc genhtml_branch_coverage=1 00:05:50.716 --rc genhtml_function_coverage=1 00:05:50.716 --rc genhtml_legend=1 00:05:50.716 --rc geninfo_all_blocks=1 00:05:50.716 --rc geninfo_unexecuted_blocks=1 00:05:50.716 00:05:50.716 ' 00:05:50.716 12:52:22 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:50.716 12:52:22 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58105 00:05:50.716 12:52:22 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.716 12:52:22 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58105 00:05:50.716 12:52:22 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58105 ']' 00:05:50.716 12:52:22 event.event_scheduler -- scheduler/scheduler.sh@34 -- # 
/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:50.716 12:52:22 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.716 12:52:22 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.716 12:52:22 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.716 12:52:22 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.716 12:52:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:50.716 [2024-11-29 12:52:22.175670] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:05:50.716 [2024-11-29 12:52:22.175800] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58105 ] 00:05:50.975 [2024-11-29 12:52:22.319204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:50.975 [2024-11-29 12:52:22.403820] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.975 [2024-11-29 12:52:22.403986] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.975 [2024-11-29 12:52:22.404907] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.975 [2024-11-29 12:52:22.405096] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:51.914 12:52:23 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.914 12:52:23 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:51.914 12:52:23 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:51.914 12:52:23 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.914 12:52:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:51.914 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:51.914 POWER: Cannot set governor of lcore 0 to userspace 00:05:51.914 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:51.914 POWER: Cannot set governor of lcore 0 to performance 00:05:51.914 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:51.914 POWER: Cannot set governor of lcore 0 to userspace 00:05:51.914 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:51.914 POWER: Cannot set governor of lcore 0 to userspace 00:05:51.914 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:51.914 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:51.914 POWER: Unable to set Power Management Environment for lcore 0 00:05:51.914 [2024-11-29 12:52:23.158455] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:51.914 [2024-11-29 12:52:23.158470] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:51.914 [2024-11-29 12:52:23.158478] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:51.914 [2024-11-29 
12:52:23.158490] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:51.914 [2024-11-29 12:52:23.158501] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:51.914 [2024-11-29 12:52:23.158507] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:51.914 12:52:23 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.914 12:52:23 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:51.914 12:52:23 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.914 12:52:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:51.914 [2024-11-29 12:52:23.234377] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:51.914 [2024-11-29 12:52:23.285357] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:51.914 12:52:23 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.914 12:52:23 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:51.914 12:52:23 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.914 12:52:23 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.914 12:52:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:51.914 ************************************ 00:05:51.914 START TEST scheduler_create_thread 00:05:51.914 ************************************ 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.914 2 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.914 3 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.914 4 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd 
--plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.914 5 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.914 6 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.914 7 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.914 8 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.914 9 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.914 10 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.914 12:52:23 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.914 12:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.835 12:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.835 12:52:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:53.835 12:52:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:53.835 12:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.835 12:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.404 12:52:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.404 00:05:54.404 real 0m2.613s 00:05:54.404 user 0m0.017s 00:05:54.404 sys 0m0.007s 00:05:54.404 12:52:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.404 ************************************ 00:05:54.404 12:52:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.404 END TEST scheduler_create_thread 00:05:54.404 ************************************ 00:05:54.663 12:52:25 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:54.663 12:52:25 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58105 00:05:54.663 12:52:25 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58105 ']' 00:05:54.663 12:52:25 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58105 00:05:54.663 12:52:25 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:54.663 12:52:25 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.663 12:52:25 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58105 00:05:54.663 12:52:25 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:54.663 12:52:25 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:54.663 killing process with pid 58105 00:05:54.663 12:52:25 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
58105' 00:05:54.663 12:52:25 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58105 00:05:54.663 12:52:25 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58105 00:05:54.922 [2024-11-29 12:52:26.390221] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:55.180 00:05:55.180 real 0m4.723s 00:05:55.180 user 0m8.829s 00:05:55.180 sys 0m0.432s 00:05:55.180 12:52:26 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.180 12:52:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:55.180 ************************************ 00:05:55.180 END TEST event_scheduler 00:05:55.180 ************************************ 00:05:55.439 12:52:26 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:55.439 12:52:26 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:55.439 12:52:26 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.439 12:52:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.439 12:52:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:55.439 ************************************ 00:05:55.439 START TEST app_repeat 00:05:55.439 ************************************ 00:05:55.439 12:52:26 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:55.439 12:52:26 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.439 12:52:26 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.439 12:52:26 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:55.439 12:52:26 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.439 12:52:26 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:55.440 12:52:26 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:55.440 12:52:26 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:55.440 12:52:26 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58204 00:05:55.440 Process app_repeat pid: 58204 00:05:55.440 12:52:26 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:55.440 12:52:26 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58204' 00:05:55.440 12:52:26 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:55.440 12:52:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:55.440 12:52:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:55.440 spdk_app_start Round 0 00:05:55.440 12:52:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58204 /var/tmp/spdk-nbd.sock 00:05:55.440 12:52:26 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58204 ']' 00:05:55.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:55.440 12:52:26 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:55.440 12:52:26 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.440 12:52:26 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
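The event_scheduler block that just finished above is driven entirely over JSON-RPC: the test binary starts with --wait-for-rpc, the framework is switched to the dynamic scheduler, and busy/idle threads are then pinned to each core through the test-local scheduler_plugin. The following is a minimal sketch of that flow under the spdk_repo paths used by this job, not the actual scheduler test script; it assumes the plugin module is importable by rpc.py (the real test arranges that via PYTHONPATH) and that the app listens on the default /var/tmp/spdk.sock.

#!/usr/bin/env bash
# Condensed sketch of the scheduler test flow traced above (assumptions noted in the text).
SPDK_DIR=/home/vagrant/spdk_repo/spdk
rpc="$SPDK_DIR/scripts/rpc.py"

# Start the scheduler test app on 4 cores with main lcore 2, holding init until RPC arrives.
"$SPDK_DIR/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
sched_pid=$!

"$rpc" framework_set_scheduler dynamic   # same RPC issued by rpc_cmd in the trace
"$rpc" framework_start_init              # finish subsystem initialization

# One busy (-a 100) and one idle (-a 0) thread pinned to each of the four cores,
# mirroring the scheduler_thread_create calls in the trace.
for mask in 0x1 0x2 0x4 0x8; do
  "$rpc" --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
  "$rpc" --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
done

kill "$sched_pid"
wait "$sched_pid" || true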
00:05:55.440 12:52:26 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.440 12:52:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:55.440 [2024-11-29 12:52:26.763931] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:05:55.440 [2024-11-29 12:52:26.764038] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58204 ] 00:05:55.440 [2024-11-29 12:52:26.915388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:55.698 [2024-11-29 12:52:26.985969] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.698 [2024-11-29 12:52:26.985994] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.698 [2024-11-29 12:52:27.045219] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:55.698 12:52:27 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.699 12:52:27 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:55.699 12:52:27 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.956 Malloc0 00:05:55.956 12:52:27 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.216 Malloc1 00:05:56.216 12:52:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.216 12:52:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.216 12:52:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.216 12:52:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:56.216 12:52:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.216 12:52:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:56.216 12:52:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.216 12:52:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.216 12:52:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.216 12:52:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:56.216 12:52:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.216 12:52:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:56.216 12:52:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:56.216 12:52:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:56.216 12:52:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.216 12:52:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:56.475 /dev/nbd0 00:05:56.733 12:52:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:56.733 12:52:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:56.733 12:52:28 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:05:56.733 12:52:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:56.733 12:52:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:56.734 12:52:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:56.734 12:52:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:56.734 12:52:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:56.734 12:52:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:56.734 12:52:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:56.734 12:52:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.734 1+0 records in 00:05:56.734 1+0 records out 00:05:56.734 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307106 s, 13.3 MB/s 00:05:56.734 12:52:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.734 12:52:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:56.734 12:52:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.734 12:52:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:56.734 12:52:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:56.734 12:52:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.734 12:52:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.734 12:52:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:56.993 /dev/nbd1 00:05:56.993 12:52:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:56.993 12:52:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:56.993 12:52:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:56.993 12:52:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:56.993 12:52:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:56.993 12:52:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:56.993 12:52:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:56.993 12:52:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:56.993 12:52:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:56.993 12:52:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:56.993 12:52:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.993 1+0 records in 00:05:56.993 1+0 records out 00:05:56.993 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359209 s, 11.4 MB/s 00:05:56.993 12:52:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.993 12:52:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:56.993 12:52:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.993 12:52:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:56.993 12:52:28 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:05:56.993 12:52:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.993 12:52:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.993 12:52:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.993 12:52:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.993 12:52:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.252 12:52:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:57.252 { 00:05:57.252 "nbd_device": "/dev/nbd0", 00:05:57.252 "bdev_name": "Malloc0" 00:05:57.252 }, 00:05:57.252 { 00:05:57.252 "nbd_device": "/dev/nbd1", 00:05:57.252 "bdev_name": "Malloc1" 00:05:57.252 } 00:05:57.252 ]' 00:05:57.252 12:52:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:57.252 { 00:05:57.252 "nbd_device": "/dev/nbd0", 00:05:57.252 "bdev_name": "Malloc0" 00:05:57.252 }, 00:05:57.252 { 00:05:57.252 "nbd_device": "/dev/nbd1", 00:05:57.252 "bdev_name": "Malloc1" 00:05:57.252 } 00:05:57.252 ]' 00:05:57.252 12:52:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.252 12:52:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:57.252 /dev/nbd1' 00:05:57.252 12:52:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:57.252 /dev/nbd1' 00:05:57.252 12:52:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.252 12:52:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:57.252 12:52:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:57.252 12:52:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:57.252 12:52:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:57.252 12:52:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:57.252 12:52:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.252 12:52:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.252 12:52:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:57.253 12:52:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:57.253 12:52:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:57.253 12:52:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:57.253 256+0 records in 00:05:57.253 256+0 records out 00:05:57.253 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00508777 s, 206 MB/s 00:05:57.253 12:52:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.253 12:52:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:57.253 256+0 records in 00:05:57.253 256+0 records out 00:05:57.253 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0286371 s, 36.6 MB/s 00:05:57.253 12:52:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.253 12:52:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:57.253 256+0 records in 00:05:57.253 
256+0 records out 00:05:57.253 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251895 s, 41.6 MB/s 00:05:57.253 12:52:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:57.253 12:52:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.253 12:52:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.253 12:52:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:57.253 12:52:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:57.253 12:52:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:57.253 12:52:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:57.253 12:52:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.253 12:52:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:57.253 12:52:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.253 12:52:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:57.512 12:52:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:57.512 12:52:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:57.512 12:52:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.512 12:52:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.512 12:52:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:57.512 12:52:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:57.512 12:52:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.512 12:52:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:57.770 12:52:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:57.770 12:52:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:57.770 12:52:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:57.770 12:52:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.770 12:52:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.770 12:52:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:57.770 12:52:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:57.770 12:52:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.770 12:52:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.770 12:52:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:58.028 12:52:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:58.028 12:52:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:58.028 12:52:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:58.028 12:52:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.028 12:52:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:05:58.028 12:52:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:58.028 12:52:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:58.028 12:52:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.028 12:52:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.028 12:52:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.028 12:52:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.286 12:52:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:58.286 12:52:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.286 12:52:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:58.286 12:52:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:58.286 12:52:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:58.286 12:52:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.286 12:52:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:58.286 12:52:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:58.286 12:52:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:58.286 12:52:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:58.286 12:52:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:58.286 12:52:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:58.286 12:52:29 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:58.891 12:52:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:58.891 [2024-11-29 12:52:30.356151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.170 [2024-11-29 12:52:30.432933] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.170 [2024-11-29 12:52:30.432937] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.170 [2024-11-29 12:52:30.505372] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:59.170 [2024-11-29 12:52:30.505515] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:59.170 [2024-11-29 12:52:30.505531] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:01.703 12:52:33 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:01.703 spdk_app_start Round 1 00:06:01.703 12:52:33 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:01.703 12:52:33 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58204 /var/tmp/spdk-nbd.sock 00:06:01.703 12:52:33 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58204 ']' 00:06:01.703 12:52:33 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:01.703 12:52:33 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:01.703 12:52:33 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
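The Round 0 pass above is the generic nbd_rpc_data_verify cycle: create two 64 MiB malloc bdevs, export them as /dev/nbd0 and /dev/nbd1, write the same 1 MiB of random data to each with O_DIRECT, and cmp it back before tearing the devices down. A stand-alone sketch of that write/verify cycle for a single device is shown below; block sizes, counts and the RPC socket path are copied from the trace, and it assumes the nbd kernel module is loaded and an app is already listening on the socket.

#!/usr/bin/env bash
# Sketch of the nbd write/verify cycle traced above (single device, assumptions in the text).
rpc=(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock)
tmp=/tmp/nbdrandtest

"${rpc[@]}" bdev_malloc_create 64 4096        # 64 MiB bdev with 4 KiB blocks; the first one is named Malloc0
"${rpc[@]}" nbd_start_disk Malloc0 /dev/nbd0  # export the bdev as an NBD block device

dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB of random data
dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct  # write it to the device, bypassing the page cache
cmp -b -n 1M "$tmp" /dev/nbd0                             # byte-for-byte readback comparison

rm -f "$tmp"
"${rpc[@]}" nbd_stop_disk /dev/nbd0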
00:06:01.703 12:52:33 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.703 12:52:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:01.961 12:52:33 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.961 12:52:33 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:01.961 12:52:33 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.220 Malloc0 00:06:02.220 12:52:33 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.478 Malloc1 00:06:02.478 12:52:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.478 12:52:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.478 12:52:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.478 12:52:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:02.478 12:52:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.478 12:52:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:02.478 12:52:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.478 12:52:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.478 12:52:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.478 12:52:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:02.478 12:52:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.478 12:52:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:02.478 12:52:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:02.478 12:52:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:02.478 12:52:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.478 12:52:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:02.737 /dev/nbd0 00:06:02.737 12:52:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:02.737 12:52:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:02.737 12:52:34 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:02.737 12:52:34 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:02.737 12:52:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:02.737 12:52:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:02.737 12:52:34 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:02.737 12:52:34 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:02.737 12:52:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:02.737 12:52:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:02.737 12:52:34 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.737 1+0 records in 00:06:02.737 1+0 records out 
00:06:02.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292741 s, 14.0 MB/s 00:06:02.737 12:52:34 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.737 12:52:34 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:02.737 12:52:34 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.737 12:52:34 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:02.737 12:52:34 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:02.737 12:52:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.737 12:52:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.737 12:52:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:02.996 /dev/nbd1 00:06:02.996 12:52:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:02.996 12:52:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:02.996 12:52:34 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:02.996 12:52:34 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:02.996 12:52:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:02.996 12:52:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:02.996 12:52:34 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:02.996 12:52:34 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:02.996 12:52:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:02.996 12:52:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:02.996 12:52:34 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.996 1+0 records in 00:06:02.996 1+0 records out 00:06:02.996 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191782 s, 21.4 MB/s 00:06:02.996 12:52:34 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.996 12:52:34 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:02.996 12:52:34 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.996 12:52:34 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:02.996 12:52:34 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:02.996 12:52:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.996 12:52:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.996 12:52:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.996 12:52:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.996 12:52:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.255 12:52:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:03.255 { 00:06:03.256 "nbd_device": "/dev/nbd0", 00:06:03.256 "bdev_name": "Malloc0" 00:06:03.256 }, 00:06:03.256 { 00:06:03.256 "nbd_device": "/dev/nbd1", 00:06:03.256 "bdev_name": "Malloc1" 00:06:03.256 } 
00:06:03.256 ]' 00:06:03.256 12:52:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.256 12:52:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:03.256 { 00:06:03.256 "nbd_device": "/dev/nbd0", 00:06:03.256 "bdev_name": "Malloc0" 00:06:03.256 }, 00:06:03.256 { 00:06:03.256 "nbd_device": "/dev/nbd1", 00:06:03.256 "bdev_name": "Malloc1" 00:06:03.256 } 00:06:03.256 ]' 00:06:03.256 12:52:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:03.256 /dev/nbd1' 00:06:03.256 12:52:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:03.256 /dev/nbd1' 00:06:03.256 12:52:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.256 12:52:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:03.256 12:52:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:03.256 12:52:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:03.256 12:52:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:03.256 12:52:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:03.256 12:52:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.256 12:52:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.256 12:52:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:03.256 12:52:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:03.256 12:52:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:03.256 12:52:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:03.256 256+0 records in 00:06:03.256 256+0 records out 00:06:03.256 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00667176 s, 157 MB/s 00:06:03.256 12:52:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.256 12:52:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:03.256 256+0 records in 00:06:03.256 256+0 records out 00:06:03.256 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256154 s, 40.9 MB/s 00:06:03.256 12:52:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.256 12:52:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:03.515 256+0 records in 00:06:03.515 256+0 records out 00:06:03.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246991 s, 42.5 MB/s 00:06:03.515 12:52:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:03.515 12:52:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.515 12:52:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.515 12:52:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:03.515 12:52:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:03.515 12:52:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:03.515 12:52:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:03.515 12:52:34 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.515 12:52:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:03.515 12:52:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.515 12:52:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:03.515 12:52:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:03.515 12:52:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:03.515 12:52:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.515 12:52:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.515 12:52:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:03.515 12:52:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:03.515 12:52:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.515 12:52:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:03.786 12:52:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:03.786 12:52:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:03.786 12:52:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:03.786 12:52:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.786 12:52:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.786 12:52:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:03.786 12:52:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:03.786 12:52:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.786 12:52:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.786 12:52:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:04.046 12:52:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:04.046 12:52:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:04.046 12:52:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:04.046 12:52:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.046 12:52:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.046 12:52:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:04.046 12:52:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:04.046 12:52:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.046 12:52:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.046 12:52:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.046 12:52:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.304 12:52:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:04.304 12:52:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.304 12:52:35 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:04.304 12:52:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:04.304 12:52:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:04.304 12:52:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.304 12:52:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:04.304 12:52:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:04.304 12:52:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:04.304 12:52:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:04.304 12:52:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:04.304 12:52:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:04.304 12:52:35 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:04.872 12:52:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:04.872 [2024-11-29 12:52:36.300804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.872 [2024-11-29 12:52:36.354031] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.872 [2024-11-29 12:52:36.354059] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.130 [2024-11-29 12:52:36.429328] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:05.130 [2024-11-29 12:52:36.429444] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:05.130 [2024-11-29 12:52:36.429460] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:07.657 spdk_app_start Round 2 00:06:07.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:07.657 12:52:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:07.657 12:52:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:07.657 12:52:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58204 /var/tmp/spdk-nbd.sock 00:06:07.657 12:52:39 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58204 ']' 00:06:07.657 12:52:39 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.657 12:52:39 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.657 12:52:39 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
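The repeated grep -q -w nbdX /proc/partitions lines in each round come from the waitfornbd and waitfornbd_exit helpers, which poll the kernel's partition table until the NBD device shows up (after nbd_start_disk) or disappears (after nbd_stop_disk). A hedged reconstruction is shown below: the 20-attempt limit, the /proc/partitions check and the 4 KiB O_DIRECT read all appear in the trace, while the sleep interval and the exact shape of the size check are assumptions about the helper's internals.

# Hedged reconstruction of the waitfornbd / waitfornbd_exit polling seen in the trace.
waitfornbd() {
  local nbd_name=$1 i
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$nbd_name" /proc/partitions && break
    sleep 0.1   # interval is an assumption; the trace only shows the retry loop
  done
  # Prove the device answers I/O: read one 4 KiB block with O_DIRECT and check that data arrived.
  dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
  local size
  size=$(stat -c %s /tmp/nbdtest)
  rm -f /tmp/nbdtest
  [[ $size -ne 0 ]]
}

waitfornbd_exit() {
  local nbd_name=$1 i
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$nbd_name" /proc/partitions || return 0   # gone, device detached cleanly
    sleep 0.1
  done
  return 1
}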
00:06:07.657 12:52:39 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.657 12:52:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:07.915 12:52:39 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.915 12:52:39 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:07.915 12:52:39 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.173 Malloc0 00:06:08.173 12:52:39 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.431 Malloc1 00:06:08.431 12:52:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.431 12:52:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.431 12:52:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.431 12:52:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:08.431 12:52:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.431 12:52:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:08.431 12:52:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.431 12:52:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.431 12:52:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.431 12:52:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:08.431 12:52:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.431 12:52:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:08.431 12:52:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:08.431 12:52:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:08.431 12:52:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.432 12:52:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:08.690 /dev/nbd0 00:06:08.690 12:52:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:08.690 12:52:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:08.690 12:52:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:08.690 12:52:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:08.690 12:52:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:08.690 12:52:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:08.690 12:52:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:08.690 12:52:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:08.690 12:52:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:08.690 12:52:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:08.690 12:52:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.690 1+0 records in 00:06:08.690 1+0 records out 
00:06:08.690 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266807 s, 15.4 MB/s 00:06:08.690 12:52:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.690 12:52:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:08.690 12:52:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.690 12:52:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:08.690 12:52:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:08.690 12:52:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.690 12:52:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.690 12:52:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:08.949 /dev/nbd1 00:06:08.949 12:52:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:08.949 12:52:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:08.949 12:52:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:08.949 12:52:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:08.949 12:52:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:08.949 12:52:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:08.949 12:52:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:08.949 12:52:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:08.949 12:52:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:08.949 12:52:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:08.949 12:52:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.949 1+0 records in 00:06:08.949 1+0 records out 00:06:08.949 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269568 s, 15.2 MB/s 00:06:08.949 12:52:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.949 12:52:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:08.949 12:52:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.949 12:52:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:08.949 12:52:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:08.949 12:52:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.949 12:52:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.949 12:52:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.949 12:52:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.949 12:52:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:09.518 { 00:06:09.518 "nbd_device": "/dev/nbd0", 00:06:09.518 "bdev_name": "Malloc0" 00:06:09.518 }, 00:06:09.518 { 00:06:09.518 "nbd_device": "/dev/nbd1", 00:06:09.518 "bdev_name": "Malloc1" 00:06:09.518 } 
00:06:09.518 ]' 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:09.518 { 00:06:09.518 "nbd_device": "/dev/nbd0", 00:06:09.518 "bdev_name": "Malloc0" 00:06:09.518 }, 00:06:09.518 { 00:06:09.518 "nbd_device": "/dev/nbd1", 00:06:09.518 "bdev_name": "Malloc1" 00:06:09.518 } 00:06:09.518 ]' 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:09.518 /dev/nbd1' 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:09.518 /dev/nbd1' 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:09.518 256+0 records in 00:06:09.518 256+0 records out 00:06:09.518 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00822036 s, 128 MB/s 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:09.518 256+0 records in 00:06:09.518 256+0 records out 00:06:09.518 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244284 s, 42.9 MB/s 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:09.518 256+0 records in 00:06:09.518 256+0 records out 00:06:09.518 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0295082 s, 35.5 MB/s 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:09.518 12:52:40 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.518 12:52:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:09.777 12:52:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:09.777 12:52:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:09.777 12:52:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:09.777 12:52:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.777 12:52:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.777 12:52:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:09.777 12:52:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:09.777 12:52:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.777 12:52:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.777 12:52:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:10.036 12:52:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:10.036 12:52:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:10.036 12:52:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:10.036 12:52:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.036 12:52:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.036 12:52:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:10.036 12:52:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:10.036 12:52:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.036 12:52:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.036 12:52:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.036 12:52:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.295 12:52:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:10.295 12:52:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:10.295 12:52:41 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:10.295 12:52:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:10.295 12:52:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:10.295 12:52:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.295 12:52:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:10.295 12:52:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:10.295 12:52:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:10.295 12:52:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:10.295 12:52:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:10.295 12:52:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:10.295 12:52:41 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:10.554 12:52:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:10.812 [2024-11-29 12:52:42.190501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:10.812 [2024-11-29 12:52:42.228090] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.812 [2024-11-29 12:52:42.228104] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.813 [2024-11-29 12:52:42.298319] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:10.813 [2024-11-29 12:52:42.298427] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:10.813 [2024-11-29 12:52:42.298442] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:14.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:14.103 12:52:44 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58204 /var/tmp/spdk-nbd.sock 00:06:14.103 12:52:44 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58204 ']' 00:06:14.103 12:52:44 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:14.103 12:52:44 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.103 12:52:44 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
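The nbd_dd_data_verify steps earlier in this run boil down to a write-then-compare loop: fill a temp file with random data, dd it onto each exported NBD device, then cmp each device against the file. A minimal standalone sketch of that pattern follows; the device names, the 1 MiB size and the temp-file path are illustrative, not the test's actual variables.

    tmp=$(mktemp)
    dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write the pattern to each exported bdev
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$dev"                              # byte-compare the first 1 MiB
    done
    rm -f "$tmp"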
00:06:14.103 12:52:44 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.103 12:52:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:14.103 12:52:45 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.103 12:52:45 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:14.103 12:52:45 event.app_repeat -- event/event.sh@39 -- # killprocess 58204 00:06:14.103 12:52:45 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58204 ']' 00:06:14.103 12:52:45 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58204 00:06:14.103 12:52:45 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:14.103 12:52:45 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.103 12:52:45 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58204 00:06:14.103 killing process with pid 58204 00:06:14.103 12:52:45 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.103 12:52:45 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.103 12:52:45 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58204' 00:06:14.103 12:52:45 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58204 00:06:14.103 12:52:45 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58204 00:06:14.103 spdk_app_start is called in Round 0. 00:06:14.103 Shutdown signal received, stop current app iteration 00:06:14.103 Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 reinitialization... 00:06:14.103 spdk_app_start is called in Round 1. 00:06:14.103 Shutdown signal received, stop current app iteration 00:06:14.103 Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 reinitialization... 00:06:14.103 spdk_app_start is called in Round 2. 00:06:14.103 Shutdown signal received, stop current app iteration 00:06:14.103 Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 reinitialization... 00:06:14.103 spdk_app_start is called in Round 3. 00:06:14.103 Shutdown signal received, stop current app iteration 00:06:14.103 12:52:45 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:14.103 12:52:45 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:14.103 00:06:14.103 real 0m18.780s 00:06:14.103 user 0m42.430s 00:06:14.103 sys 0m2.885s 00:06:14.103 12:52:45 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.103 ************************************ 00:06:14.103 END TEST app_repeat 00:06:14.103 ************************************ 00:06:14.103 12:52:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:14.103 12:52:45 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:14.103 12:52:45 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:14.103 12:52:45 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.103 12:52:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.103 12:52:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.103 ************************************ 00:06:14.103 START TEST cpu_locks 00:06:14.103 ************************************ 00:06:14.103 12:52:45 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:14.361 * Looking for test storage... 
00:06:14.361 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:14.361 12:52:45 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:14.361 12:52:45 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:14.361 12:52:45 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:14.361 12:52:45 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:14.361 12:52:45 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.361 12:52:45 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.361 12:52:45 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.361 12:52:45 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.361 12:52:45 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.361 12:52:45 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.362 12:52:45 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.362 12:52:45 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.362 12:52:45 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.362 12:52:45 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.362 12:52:45 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.362 12:52:45 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:14.362 12:52:45 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:14.362 12:52:45 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.362 12:52:45 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:14.362 12:52:45 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:14.362 12:52:45 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:14.362 12:52:45 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.362 12:52:45 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:14.362 12:52:45 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.362 12:52:45 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:14.362 12:52:45 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:14.362 12:52:45 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.362 12:52:45 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:14.362 12:52:45 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.362 12:52:45 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.362 12:52:45 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.362 12:52:45 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:14.362 12:52:45 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.362 12:52:45 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:14.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.362 --rc genhtml_branch_coverage=1 00:06:14.362 --rc genhtml_function_coverage=1 00:06:14.362 --rc genhtml_legend=1 00:06:14.362 --rc geninfo_all_blocks=1 00:06:14.362 --rc geninfo_unexecuted_blocks=1 00:06:14.362 00:06:14.362 ' 00:06:14.362 12:52:45 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:14.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.362 --rc genhtml_branch_coverage=1 00:06:14.362 --rc genhtml_function_coverage=1 
00:06:14.362 --rc genhtml_legend=1 00:06:14.362 --rc geninfo_all_blocks=1 00:06:14.362 --rc geninfo_unexecuted_blocks=1 00:06:14.362 00:06:14.362 ' 00:06:14.362 12:52:45 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:14.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.362 --rc genhtml_branch_coverage=1 00:06:14.362 --rc genhtml_function_coverage=1 00:06:14.362 --rc genhtml_legend=1 00:06:14.362 --rc geninfo_all_blocks=1 00:06:14.362 --rc geninfo_unexecuted_blocks=1 00:06:14.362 00:06:14.362 ' 00:06:14.362 12:52:45 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:14.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.362 --rc genhtml_branch_coverage=1 00:06:14.362 --rc genhtml_function_coverage=1 00:06:14.362 --rc genhtml_legend=1 00:06:14.362 --rc geninfo_all_blocks=1 00:06:14.362 --rc geninfo_unexecuted_blocks=1 00:06:14.362 00:06:14.362 ' 00:06:14.362 12:52:45 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:14.362 12:52:45 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:14.362 12:52:45 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:14.362 12:52:45 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:14.362 12:52:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.362 12:52:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.362 12:52:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.362 ************************************ 00:06:14.362 START TEST default_locks 00:06:14.362 ************************************ 00:06:14.362 12:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:14.362 12:52:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58643 00:06:14.362 12:52:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58643 00:06:14.362 12:52:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:14.362 12:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58643 ']' 00:06:14.362 12:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.362 12:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.362 12:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.362 12:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.362 12:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.362 [2024-11-29 12:52:45.830076] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:06:14.362 [2024-11-29 12:52:45.830213] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58643 ] 00:06:14.620 [2024-11-29 12:52:45.982918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.620 [2024-11-29 12:52:46.043916] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.620 [2024-11-29 12:52:46.121978] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.558 12:52:46 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.558 12:52:46 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:15.558 12:52:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58643 00:06:15.558 12:52:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58643 00:06:15.558 12:52:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.558 12:52:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58643 00:06:15.558 12:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58643 ']' 00:06:15.558 12:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58643 00:06:15.558 12:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:15.558 12:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.558 12:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58643 00:06:15.817 12:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.817 12:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.817 killing process with pid 58643 00:06:15.817 12:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58643' 00:06:15.817 12:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58643 00:06:15.817 12:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58643 00:06:16.076 12:52:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58643 00:06:16.076 12:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:16.076 12:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58643 00:06:16.076 12:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:16.076 12:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.076 12:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:16.076 12:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.076 12:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58643 00:06:16.076 12:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58643 ']' 00:06:16.076 12:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.076 
12:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.076 12:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.076 12:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.076 12:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.076 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58643) - No such process 00:06:16.076 ERROR: process (pid: 58643) is no longer running 00:06:16.076 12:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.076 12:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:16.076 12:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:16.076 12:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:16.076 12:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:16.076 12:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:16.076 12:52:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:16.076 12:52:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:16.076 12:52:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:16.076 12:52:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:16.076 00:06:16.076 real 0m1.714s 00:06:16.076 user 0m1.785s 00:06:16.076 sys 0m0.519s 00:06:16.076 12:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.076 12:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.076 ************************************ 00:06:16.076 END TEST default_locks 00:06:16.076 ************************************ 00:06:16.076 12:52:47 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:16.076 12:52:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.076 12:52:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.076 12:52:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.076 ************************************ 00:06:16.076 START TEST default_locks_via_rpc 00:06:16.076 ************************************ 00:06:16.076 12:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:16.076 12:52:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58689 00:06:16.076 12:52:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.076 12:52:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58689 00:06:16.076 12:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58689 ']' 00:06:16.076 12:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.076 12:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:06:16.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.076 12:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.076 12:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.076 12:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.076 [2024-11-29 12:52:47.586806] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:06:16.076 [2024-11-29 12:52:47.586915] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58689 ] 00:06:16.336 [2024-11-29 12:52:47.728251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.336 [2024-11-29 12:52:47.781659] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.594 [2024-11-29 12:52:47.855079] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.161 12:52:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.161 12:52:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:17.161 12:52:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:17.161 12:52:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.161 12:52:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.161 12:52:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.161 12:52:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:17.161 12:52:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:17.161 12:52:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:17.161 12:52:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:17.161 12:52:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:17.161 12:52:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.161 12:52:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.161 12:52:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.161 12:52:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58689 00:06:17.161 12:52:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58689 00:06:17.161 12:52:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.421 12:52:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58689 00:06:17.421 12:52:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58689 ']' 00:06:17.421 12:52:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58689 00:06:17.421 12:52:48 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:17.421 12:52:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.421 12:52:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58689 00:06:17.421 12:52:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.421 12:52:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.421 killing process with pid 58689 00:06:17.421 12:52:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58689' 00:06:17.421 12:52:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58689 00:06:17.421 12:52:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58689 00:06:17.988 00:06:17.988 real 0m1.876s 00:06:17.988 user 0m1.953s 00:06:17.988 sys 0m0.528s 00:06:17.988 12:52:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.988 12:52:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.988 ************************************ 00:06:17.988 END TEST default_locks_via_rpc 00:06:17.988 ************************************ 00:06:17.988 12:52:49 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:17.988 12:52:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.988 12:52:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.988 12:52:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.988 ************************************ 00:06:17.988 START TEST non_locking_app_on_locked_coremask 00:06:17.988 ************************************ 00:06:17.988 12:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:17.988 12:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58740 00:06:17.988 12:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58740 /var/tmp/spdk.sock 00:06:17.988 12:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58740 ']' 00:06:17.988 12:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:17.988 12:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.988 12:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.988 12:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
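The locks_exist checks running through these cpu_locks tests pipe lslocks for the target's PID into a grep for the spdk_cpu_lock prefix. A rough standalone approximation is below; the pgrep lookup is illustrative only, since the tests track the PID of the spdk_tgt they spawned.

    pid=$(pgrep -x spdk_tgt | head -n1)              # illustrative; the tests use their own spawned PID
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "pid $pid holds a /var/tmp/spdk_cpu_lock_* core lock"
    fi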
00:06:17.988 12:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.988 12:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.246 [2024-11-29 12:52:49.529486] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:06:18.246 [2024-11-29 12:52:49.529587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58740 ] 00:06:18.246 [2024-11-29 12:52:49.676136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.246 [2024-11-29 12:52:49.735233] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.504 [2024-11-29 12:52:49.828570] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:19.072 12:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.072 12:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:19.072 12:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58756 00:06:19.072 12:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58756 /var/tmp/spdk2.sock 00:06:19.072 12:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:19.072 12:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58756 ']' 00:06:19.072 12:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.072 12:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.072 12:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.072 12:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.072 12:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.072 [2024-11-29 12:52:50.557187] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:06:19.072 [2024-11-29 12:52:50.557268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58756 ] 00:06:19.330 [2024-11-29 12:52:50.708419] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:19.330 [2024-11-29 12:52:50.708452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.589 [2024-11-29 12:52:50.845646] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.589 [2024-11-29 12:52:51.036580] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:20.155 12:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.155 12:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:20.155 12:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58740 00:06:20.155 12:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58740 00:06:20.155 12:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.092 12:52:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58740 00:06:21.092 12:52:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58740 ']' 00:06:21.092 12:52:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58740 00:06:21.092 12:52:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:21.092 12:52:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.092 12:52:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58740 00:06:21.092 12:52:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.092 12:52:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.092 killing process with pid 58740 00:06:21.092 12:52:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58740' 00:06:21.092 12:52:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58740 00:06:21.092 12:52:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58740 00:06:22.467 12:52:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58756 00:06:22.467 12:52:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58756 ']' 00:06:22.467 12:52:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58756 00:06:22.467 12:52:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:22.467 12:52:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.467 12:52:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58756 00:06:22.467 12:52:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.468 12:52:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.468 killing process with pid 58756 00:06:22.468 12:52:53 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58756' 00:06:22.468 12:52:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58756 00:06:22.468 12:52:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58756 00:06:22.727 00:06:22.727 real 0m4.651s 00:06:22.727 user 0m5.003s 00:06:22.727 sys 0m1.319s 00:06:22.727 12:52:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.727 12:52:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.727 ************************************ 00:06:22.727 END TEST non_locking_app_on_locked_coremask 00:06:22.727 ************************************ 00:06:22.727 12:52:54 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:22.727 12:52:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.727 12:52:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.727 12:52:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.727 ************************************ 00:06:22.727 START TEST locking_app_on_unlocked_coremask 00:06:22.727 ************************************ 00:06:22.727 12:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:22.727 12:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58829 00:06:22.727 12:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58829 /var/tmp/spdk.sock 00:06:22.727 12:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58829 ']' 00:06:22.727 12:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.727 12:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.727 12:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.727 12:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.727 12:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:22.727 12:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.727 [2024-11-29 12:52:54.222210] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:06:22.727 [2024-11-29 12:52:54.222303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58829 ] 00:06:22.986 [2024-11-29 12:52:54.363055] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
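Both the non_locking_app_on_locked_coremask run just finished and the locking_app_on_unlocked_coremask run starting here lean on the --disable-cpumask-locks flag visible in the command lines above: an instance started with it does not take the per-core lock files, so a second target can come up on an already-claimed mask. A rough two-instance sketch, with the binary path and RPC socket taken from the log and the backgrounding simplified:

    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt        # path as printed in the log
    "$SPDK_TGT" -m 0x1 &                                            # claims the core 0 lock file
    "$SPDK_TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # same mask, core locks deactivated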
00:06:22.986 [2024-11-29 12:52:54.363088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.986 [2024-11-29 12:52:54.424194] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.245 [2024-11-29 12:52:54.513353] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.245 12:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.245 12:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:23.245 12:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58837 00:06:23.245 12:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58837 /var/tmp/spdk2.sock 00:06:23.245 12:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:23.245 12:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58837 ']' 00:06:23.245 12:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.245 12:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.245 12:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:23.245 12:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.245 12:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.504 [2024-11-29 12:52:54.827821] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:06:23.504 [2024-11-29 12:52:54.827943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58837 ] 00:06:23.504 [2024-11-29 12:52:54.981367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.763 [2024-11-29 12:52:55.112730] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.021 [2024-11-29 12:52:55.298092] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.588 12:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.588 12:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:24.588 12:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58837 00:06:24.588 12:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58837 00:06:24.588 12:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.156 12:52:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58829 00:06:25.156 12:52:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58829 ']' 00:06:25.156 12:52:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58829 00:06:25.156 12:52:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:25.156 12:52:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.156 12:52:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58829 00:06:25.416 12:52:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.416 12:52:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.416 killing process with pid 58829 00:06:25.416 12:52:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58829' 00:06:25.416 12:52:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58829 00:06:25.416 12:52:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58829 00:06:26.351 12:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58837 00:06:26.351 12:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58837 ']' 00:06:26.351 12:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58837 00:06:26.351 12:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:26.351 12:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:26.351 12:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58837 00:06:26.351 killing process with pid 58837 00:06:26.351 12:52:57 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:26.351 12:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:26.351 12:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58837' 00:06:26.351 12:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58837 00:06:26.351 12:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58837 00:06:26.919 ************************************ 00:06:26.919 END TEST locking_app_on_unlocked_coremask 00:06:26.919 ************************************ 00:06:26.919 00:06:26.919 real 0m4.110s 00:06:26.919 user 0m4.248s 00:06:26.919 sys 0m1.293s 00:06:26.919 12:52:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.919 12:52:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.919 12:52:58 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:26.919 12:52:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.919 12:52:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.919 12:52:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.919 ************************************ 00:06:26.919 START TEST locking_app_on_locked_coremask 00:06:26.919 ************************************ 00:06:26.919 12:52:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:26.919 12:52:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58910 00:06:26.919 12:52:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.919 12:52:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58910 /var/tmp/spdk.sock 00:06:26.919 12:52:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58910 ']' 00:06:26.919 12:52:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.919 12:52:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.919 12:52:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.919 12:52:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.919 12:52:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.919 [2024-11-29 12:52:58.397033] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:06:26.919 [2024-11-29 12:52:58.397299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58910 ] 00:06:27.177 [2024-11-29 12:52:58.543323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.177 [2024-11-29 12:52:58.596080] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.177 [2024-11-29 12:52:58.683701] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:27.436 12:52:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.436 12:52:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:27.436 12:52:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58918 00:06:27.436 12:52:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:27.436 12:52:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58918 /var/tmp/spdk2.sock 00:06:27.436 12:52:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:27.436 12:52:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58918 /var/tmp/spdk2.sock 00:06:27.436 12:52:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:27.436 12:52:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.436 12:52:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:27.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:27.436 12:52:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.436 12:52:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58918 /var/tmp/spdk2.sock 00:06:27.436 12:52:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58918 ']' 00:06:27.436 12:52:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.436 12:52:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.436 12:52:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.436 12:52:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.436 12:52:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.695 [2024-11-29 12:52:59.008066] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:06:27.695 [2024-11-29 12:52:59.008171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58918 ] 00:06:27.695 [2024-11-29 12:52:59.169503] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58910 has claimed it. 00:06:27.695 [2024-11-29 12:52:59.169569] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:28.279 ERROR: process (pid: 58918) is no longer running 00:06:28.279 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58918) - No such process 00:06:28.279 12:52:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.279 12:52:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:28.279 12:52:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:28.279 12:52:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:28.279 12:52:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:28.279 12:52:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:28.279 12:52:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58910 00:06:28.279 12:52:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58910 00:06:28.279 12:52:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.860 12:53:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58910 00:06:28.860 12:53:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58910 ']' 00:06:28.860 12:53:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58910 00:06:28.860 12:53:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:28.860 12:53:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.860 12:53:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58910 00:06:28.860 killing process with pid 58910 00:06:28.860 12:53:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.860 12:53:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.860 12:53:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58910' 00:06:28.860 12:53:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58910 00:06:28.860 12:53:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58910 00:06:29.428 00:06:29.428 real 0m2.454s 00:06:29.428 user 0m2.635s 00:06:29.428 sys 0m0.702s 00:06:29.428 12:53:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.428 ************************************ 00:06:29.428 END 
TEST locking_app_on_locked_coremask 00:06:29.428 ************************************ 00:06:29.428 12:53:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.428 12:53:00 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:29.428 12:53:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.428 12:53:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.428 12:53:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.428 ************************************ 00:06:29.428 START TEST locking_overlapped_coremask 00:06:29.428 ************************************ 00:06:29.428 12:53:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:29.428 12:53:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58969 00:06:29.428 12:53:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58969 /var/tmp/spdk.sock 00:06:29.428 12:53:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:29.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.428 12:53:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58969 ']' 00:06:29.428 12:53:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.428 12:53:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.428 12:53:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.428 12:53:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.428 12:53:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.428 [2024-11-29 12:53:00.918791] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:06:29.428 [2024-11-29 12:53:00.919294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58969 ] 00:06:29.687 [2024-11-29 12:53:01.072459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:29.688 [2024-11-29 12:53:01.151600] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.688 [2024-11-29 12:53:01.151708] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.688 [2024-11-29 12:53:01.151716] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.947 [2024-11-29 12:53:01.250581] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:30.515 12:53:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.515 12:53:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:30.515 12:53:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58987 00:06:30.515 12:53:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:30.515 12:53:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58987 /var/tmp/spdk2.sock 00:06:30.515 12:53:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:30.515 12:53:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58987 /var/tmp/spdk2.sock 00:06:30.515 12:53:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:30.515 12:53:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.515 12:53:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:30.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:30.515 12:53:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.515 12:53:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58987 /var/tmp/spdk2.sock 00:06:30.515 12:53:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58987 ']' 00:06:30.515 12:53:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.515 12:53:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.515 12:53:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.515 12:53:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.515 12:53:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.515 [2024-11-29 12:53:02.024094] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
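The first target above was started with -m 0x7 (cores 0-2) and the second is started with -m 0x1c (cores 2-4), so the masks overlap only on core 2, which is exactly where the lock claim fails a few lines below. A throwaway sketch for decoding such a hex core mask; the mask value is just an example:

    mask=0x1c
    for ((core = 0; core < 64; core++)); do
        if (( (mask >> core) & 1 )); then
            echo "core $core selected"                # 0x1c selects cores 2, 3 and 4
        fi
    done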
00:06:30.515 [2024-11-29 12:53:02.024205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58987 ] 00:06:30.774 [2024-11-29 12:53:02.184172] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58969 has claimed it. 00:06:30.774 [2024-11-29 12:53:02.184239] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:31.341 ERROR: process (pid: 58987) is no longer running 00:06:31.341 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58987) - No such process 00:06:31.341 12:53:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.341 12:53:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:31.341 12:53:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:31.341 12:53:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:31.341 12:53:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:31.341 12:53:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:31.341 12:53:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:31.341 12:53:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:31.341 12:53:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:31.341 12:53:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:31.341 12:53:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58969 00:06:31.341 12:53:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 58969 ']' 00:06:31.341 12:53:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 58969 00:06:31.341 12:53:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:31.341 12:53:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.341 12:53:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58969 00:06:31.341 killing process with pid 58969 00:06:31.341 12:53:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:31.341 12:53:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:31.341 12:53:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58969' 00:06:31.341 12:53:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 58969 00:06:31.341 12:53:02 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 58969 00:06:31.909 ************************************ 00:06:31.909 END TEST locking_overlapped_coremask 00:06:31.909 ************************************ 00:06:31.909 00:06:31.909 real 0m2.347s 00:06:31.909 user 0m6.525s 00:06:31.909 sys 0m0.572s 00:06:31.909 12:53:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.909 12:53:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.909 12:53:03 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:31.909 12:53:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.909 12:53:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.909 12:53:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.909 ************************************ 00:06:31.909 START TEST locking_overlapped_coremask_via_rpc 00:06:31.909 ************************************ 00:06:31.909 12:53:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:31.909 12:53:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59033 00:06:31.909 12:53:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59033 /var/tmp/spdk.sock 00:06:31.909 12:53:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:31.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.909 12:53:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59033 ']' 00:06:31.909 12:53:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.909 12:53:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.909 12:53:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.909 12:53:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.909 12:53:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.909 [2024-11-29 12:53:03.318722] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:06:31.909 [2024-11-29 12:53:03.319117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59033 ] 00:06:32.169 [2024-11-29 12:53:03.471762] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
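The locking_overlapped_coremask run above exercises the core-claim mechanism directly: the first spdk_tgt (-m 0x7) holds one lock file per claimed core (/var/tmp/spdk_cpu_lock_000 through _002, the set that check_remaining_locks verifies), and the second target (-m 0x1c) aborts because core 2 appears in both masks. A rough standalone illustration of that overlap, using a hypothetical mask_to_cores helper that is not part of the test suite:

mask_to_cores() {                        # print the core indices set in a hex mask
  local mask=$(( $1 )) core=0
  while (( mask )); do
    (( mask & 1 )) && echo "$core"
    (( mask >>= 1, core++ ))
  done
}
comm -12 <(mask_to_cores 0x7) <(mask_to_cores 0x1c)   # -> 2, the contested core
ls /var/tmp/spdk_cpu_lock_*                           # lock files held by the running target, one per claimed core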
00:06:32.169 [2024-11-29 12:53:03.471818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.169 [2024-11-29 12:53:03.534216] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.169 [2024-11-29 12:53:03.534445] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.169 [2024-11-29 12:53:03.534449] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.169 [2024-11-29 12:53:03.614301] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.428 12:53:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.428 12:53:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:32.428 12:53:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59043 00:06:32.428 12:53:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59043 /var/tmp/spdk2.sock 00:06:32.428 12:53:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:32.428 12:53:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59043 ']' 00:06:32.428 12:53:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.428 12:53:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.428 12:53:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.428 12:53:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.428 12:53:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.428 [2024-11-29 12:53:03.908972] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:06:32.428 [2024-11-29 12:53:03.909308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59043 ] 00:06:32.687 [2024-11-29 12:53:04.066485] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:32.687 [2024-11-29 12:53:04.066577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.945 [2024-11-29 12:53:04.229456] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:32.945 [2024-11-29 12:53:04.233001] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:32.945 [2024-11-29 12:53:04.233003] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.945 [2024-11-29 12:53:04.424083] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.512 12:53:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.512 12:53:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:33.512 12:53:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:33.512 12:53:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.512 12:53:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.512 12:53:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.512 12:53:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:33.512 12:53:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:33.512 12:53:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:33.512 12:53:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:33.512 12:53:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.512 12:53:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:33.512 12:53:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.512 12:53:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:33.512 12:53:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.512 12:53:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.512 [2024-11-29 12:53:04.939134] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59033 has claimed it. 00:06:33.512 request: 00:06:33.512 { 00:06:33.512 "method": "framework_enable_cpumask_locks", 00:06:33.512 "req_id": 1 00:06:33.512 } 00:06:33.512 Got JSON-RPC error response 00:06:33.512 response: 00:06:33.512 { 00:06:33.512 "code": -32603, 00:06:33.512 "message": "Failed to claim CPU core: 2" 00:06:33.512 } 00:06:33.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
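The via_rpc variant reaches the same refusal over JSON-RPC: both targets start with --disable-cpumask-locks, the first then claims cores 0-2 through framework_enable_cpumask_locks, and the same call against the second target's socket returns the -32603 "Failed to claim CPU core: 2" response shown above. A minimal sketch of the same exchange driven with scripts/rpc.py (paths as used in this run; both targets assumed already running):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks                          # first target on the default /var/tmp/spdk.sock: succeeds, claims cores 0-2
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target (-m 0x1c): fails, core 2 is already locked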
00:06:33.512 12:53:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:33.512 12:53:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:33.512 12:53:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:33.512 12:53:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:33.512 12:53:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:33.512 12:53:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59033 /var/tmp/spdk.sock 00:06:33.512 12:53:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59033 ']' 00:06:33.512 12:53:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.512 12:53:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.512 12:53:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.512 12:53:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.512 12:53:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.782 12:53:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.782 12:53:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:33.782 12:53:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59043 /var/tmp/spdk2.sock 00:06:33.782 12:53:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59043 ']' 00:06:33.782 12:53:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.782 12:53:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.782 12:53:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:33.782 12:53:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.782 12:53:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.046 12:53:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.046 12:53:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:34.046 12:53:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:34.046 ************************************ 00:06:34.046 END TEST locking_overlapped_coremask_via_rpc 00:06:34.046 ************************************ 00:06:34.046 12:53:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:34.046 12:53:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:34.046 12:53:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:34.046 00:06:34.046 real 0m2.306s 00:06:34.046 user 0m1.288s 00:06:34.046 sys 0m0.168s 00:06:34.046 12:53:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.046 12:53:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.304 12:53:05 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:34.304 12:53:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59033 ]] 00:06:34.304 12:53:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59033 00:06:34.304 12:53:05 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59033 ']' 00:06:34.304 12:53:05 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59033 00:06:34.304 12:53:05 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:34.304 12:53:05 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.304 12:53:05 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59033 00:06:34.304 killing process with pid 59033 00:06:34.304 12:53:05 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.304 12:53:05 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.304 12:53:05 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59033' 00:06:34.304 12:53:05 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59033 00:06:34.304 12:53:05 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59033 00:06:34.563 12:53:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59043 ]] 00:06:34.563 12:53:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59043 00:06:34.563 12:53:06 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59043 ']' 00:06:34.563 12:53:06 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59043 00:06:34.563 12:53:06 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:34.563 12:53:06 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.563 
12:53:06 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59043 00:06:34.563 killing process with pid 59043 00:06:34.563 12:53:06 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:34.563 12:53:06 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:34.563 12:53:06 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59043' 00:06:34.563 12:53:06 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59043 00:06:34.563 12:53:06 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59043 00:06:35.130 12:53:06 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:35.130 Process with pid 59033 is not found 00:06:35.130 12:53:06 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:35.130 12:53:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59033 ]] 00:06:35.130 12:53:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59033 00:06:35.130 12:53:06 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59033 ']' 00:06:35.130 12:53:06 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59033 00:06:35.130 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59033) - No such process 00:06:35.130 12:53:06 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59033 is not found' 00:06:35.130 12:53:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59043 ]] 00:06:35.130 12:53:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59043 00:06:35.130 Process with pid 59043 is not found 00:06:35.130 12:53:06 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59043 ']' 00:06:35.130 12:53:06 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59043 00:06:35.130 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59043) - No such process 00:06:35.130 12:53:06 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59043 is not found' 00:06:35.130 12:53:06 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:35.130 00:06:35.130 real 0m21.020s 00:06:35.130 user 0m35.708s 00:06:35.130 sys 0m6.141s 00:06:35.130 ************************************ 00:06:35.130 END TEST cpu_locks 00:06:35.130 ************************************ 00:06:35.130 12:53:06 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.130 12:53:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.130 ************************************ 00:06:35.130 END TEST event 00:06:35.130 ************************************ 00:06:35.130 00:06:35.130 real 0m48.878s 00:06:35.130 user 1m33.507s 00:06:35.130 sys 0m9.918s 00:06:35.130 12:53:06 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.130 12:53:06 event -- common/autotest_common.sh@10 -- # set +x 00:06:35.389 12:53:06 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:35.389 12:53:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.389 12:53:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.389 12:53:06 -- common/autotest_common.sh@10 -- # set +x 00:06:35.389 ************************************ 00:06:35.389 START TEST thread 00:06:35.389 ************************************ 00:06:35.389 12:53:06 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:35.389 * Looking for test storage... 
00:06:35.389 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:35.389 12:53:06 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:35.389 12:53:06 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:35.389 12:53:06 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:35.389 12:53:06 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:35.389 12:53:06 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.389 12:53:06 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.389 12:53:06 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.389 12:53:06 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.389 12:53:06 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.389 12:53:06 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.389 12:53:06 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.389 12:53:06 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.389 12:53:06 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.389 12:53:06 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.389 12:53:06 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.389 12:53:06 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:35.389 12:53:06 thread -- scripts/common.sh@345 -- # : 1 00:06:35.389 12:53:06 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.389 12:53:06 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:35.389 12:53:06 thread -- scripts/common.sh@365 -- # decimal 1 00:06:35.389 12:53:06 thread -- scripts/common.sh@353 -- # local d=1 00:06:35.389 12:53:06 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.389 12:53:06 thread -- scripts/common.sh@355 -- # echo 1 00:06:35.389 12:53:06 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.389 12:53:06 thread -- scripts/common.sh@366 -- # decimal 2 00:06:35.389 12:53:06 thread -- scripts/common.sh@353 -- # local d=2 00:06:35.389 12:53:06 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.389 12:53:06 thread -- scripts/common.sh@355 -- # echo 2 00:06:35.389 12:53:06 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.389 12:53:06 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.389 12:53:06 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.389 12:53:06 thread -- scripts/common.sh@368 -- # return 0 00:06:35.389 12:53:06 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.389 12:53:06 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:35.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.389 --rc genhtml_branch_coverage=1 00:06:35.389 --rc genhtml_function_coverage=1 00:06:35.389 --rc genhtml_legend=1 00:06:35.389 --rc geninfo_all_blocks=1 00:06:35.389 --rc geninfo_unexecuted_blocks=1 00:06:35.389 00:06:35.389 ' 00:06:35.389 12:53:06 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:35.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.389 --rc genhtml_branch_coverage=1 00:06:35.389 --rc genhtml_function_coverage=1 00:06:35.389 --rc genhtml_legend=1 00:06:35.389 --rc geninfo_all_blocks=1 00:06:35.389 --rc geninfo_unexecuted_blocks=1 00:06:35.389 00:06:35.389 ' 00:06:35.389 12:53:06 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:35.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:35.389 --rc genhtml_branch_coverage=1 00:06:35.389 --rc genhtml_function_coverage=1 00:06:35.389 --rc genhtml_legend=1 00:06:35.389 --rc geninfo_all_blocks=1 00:06:35.389 --rc geninfo_unexecuted_blocks=1 00:06:35.389 00:06:35.389 ' 00:06:35.389 12:53:06 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:35.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.389 --rc genhtml_branch_coverage=1 00:06:35.389 --rc genhtml_function_coverage=1 00:06:35.389 --rc genhtml_legend=1 00:06:35.389 --rc geninfo_all_blocks=1 00:06:35.389 --rc geninfo_unexecuted_blocks=1 00:06:35.389 00:06:35.389 ' 00:06:35.390 12:53:06 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:35.390 12:53:06 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:35.390 12:53:06 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.390 12:53:06 thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.390 ************************************ 00:06:35.390 START TEST thread_poller_perf 00:06:35.390 ************************************ 00:06:35.390 12:53:06 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:35.390 [2024-11-29 12:53:06.891188] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:06:35.390 [2024-11-29 12:53:06.891499] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59179 ] 00:06:35.649 [2024-11-29 12:53:07.052761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.649 [2024-11-29 12:53:07.116392] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.649 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:37.032 [2024-11-29T12:53:08.547Z] ====================================== 00:06:37.032 [2024-11-29T12:53:08.547Z] busy:2211392868 (cyc) 00:06:37.032 [2024-11-29T12:53:08.547Z] total_run_count: 384000 00:06:37.032 [2024-11-29T12:53:08.547Z] tsc_hz: 2200000000 (cyc) 00:06:37.032 [2024-11-29T12:53:08.547Z] ====================================== 00:06:37.032 [2024-11-29T12:53:08.547Z] poller_cost: 5758 (cyc), 2617 (nsec) 00:06:37.032 00:06:37.032 real 0m1.335s 00:06:37.032 ************************************ 00:06:37.032 END TEST thread_poller_perf 00:06:37.032 ************************************ 00:06:37.032 user 0m1.164s 00:06:37.032 sys 0m0.060s 00:06:37.032 12:53:08 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.032 12:53:08 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:37.032 12:53:08 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:37.032 12:53:08 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:37.032 12:53:08 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.032 12:53:08 thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.032 ************************************ 00:06:37.032 START TEST thread_poller_perf 00:06:37.032 ************************************ 00:06:37.032 12:53:08 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:37.032 [2024-11-29 12:53:08.279641] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:06:37.032 [2024-11-29 12:53:08.279727] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59215 ] 00:06:37.032 [2024-11-29 12:53:08.414021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.032 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:37.032 [2024-11-29 12:53:08.458790] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.409 [2024-11-29T12:53:09.924Z] ====================================== 00:06:38.409 [2024-11-29T12:53:09.924Z] busy:2202356610 (cyc) 00:06:38.409 [2024-11-29T12:53:09.924Z] total_run_count: 4920000 00:06:38.409 [2024-11-29T12:53:09.924Z] tsc_hz: 2200000000 (cyc) 00:06:38.409 [2024-11-29T12:53:09.924Z] ====================================== 00:06:38.409 [2024-11-29T12:53:09.924Z] poller_cost: 447 (cyc), 203 (nsec) 00:06:38.409 00:06:38.409 real 0m1.279s 00:06:38.409 user 0m1.136s 00:06:38.409 sys 0m0.035s 00:06:38.409 12:53:09 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.409 12:53:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:38.409 ************************************ 00:06:38.409 END TEST thread_poller_perf 00:06:38.409 ************************************ 00:06:38.409 12:53:09 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:38.409 00:06:38.409 real 0m2.917s 00:06:38.409 user 0m2.453s 00:06:38.409 sys 0m0.240s 00:06:38.409 ************************************ 00:06:38.409 END TEST thread 00:06:38.409 ************************************ 00:06:38.409 12:53:09 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.409 12:53:09 thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.409 12:53:09 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:38.409 12:53:09 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:38.409 12:53:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.409 12:53:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.409 12:53:09 -- common/autotest_common.sh@10 -- # set +x 00:06:38.409 ************************************ 00:06:38.409 START TEST app_cmdline 00:06:38.409 ************************************ 00:06:38.409 12:53:09 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:38.410 * Looking for test storage... 
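The two poller_perf summaries above are internally consistent: the reported poller_cost matches the busy cycle count divided by total_run_count, converted to nanoseconds through tsc_hz (2.2 GHz in this run). A quick check with shell arithmetic:

echo $(( 2211392868 / 384000 ))              # ~5758 cycles per poll (1 us period run)
echo $(( 5758 * 1000000000 / 2200000000 ))   # ~2617 ns
echo $(( 2202356610 / 4920000 ))             # ~447 cycles per poll (0 us period run)
echo $(( 447 * 1000000000 / 2200000000 ))    # ~203 ns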
00:06:38.410 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:38.410 12:53:09 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:38.410 12:53:09 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:38.410 12:53:09 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:38.410 12:53:09 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:38.410 12:53:09 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.410 12:53:09 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.410 12:53:09 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.410 12:53:09 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.410 12:53:09 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.410 12:53:09 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.410 12:53:09 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.410 12:53:09 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.410 12:53:09 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.410 12:53:09 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.410 12:53:09 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.410 12:53:09 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:38.410 12:53:09 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:38.410 12:53:09 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.410 12:53:09 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:38.410 12:53:09 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:38.410 12:53:09 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:38.410 12:53:09 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.410 12:53:09 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:38.410 12:53:09 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.410 12:53:09 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:38.410 12:53:09 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:38.410 12:53:09 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.410 12:53:09 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:38.410 12:53:09 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.410 12:53:09 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.410 12:53:09 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.410 12:53:09 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:38.410 12:53:09 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.410 12:53:09 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:38.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.410 --rc genhtml_branch_coverage=1 00:06:38.410 --rc genhtml_function_coverage=1 00:06:38.410 --rc genhtml_legend=1 00:06:38.410 --rc geninfo_all_blocks=1 00:06:38.410 --rc geninfo_unexecuted_blocks=1 00:06:38.410 00:06:38.410 ' 00:06:38.410 12:53:09 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:38.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.410 --rc genhtml_branch_coverage=1 00:06:38.410 --rc genhtml_function_coverage=1 00:06:38.410 --rc genhtml_legend=1 00:06:38.410 --rc geninfo_all_blocks=1 00:06:38.410 --rc geninfo_unexecuted_blocks=1 00:06:38.410 
00:06:38.410 ' 00:06:38.410 12:53:09 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:38.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.410 --rc genhtml_branch_coverage=1 00:06:38.410 --rc genhtml_function_coverage=1 00:06:38.410 --rc genhtml_legend=1 00:06:38.410 --rc geninfo_all_blocks=1 00:06:38.410 --rc geninfo_unexecuted_blocks=1 00:06:38.410 00:06:38.410 ' 00:06:38.410 12:53:09 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:38.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.410 --rc genhtml_branch_coverage=1 00:06:38.410 --rc genhtml_function_coverage=1 00:06:38.410 --rc genhtml_legend=1 00:06:38.410 --rc geninfo_all_blocks=1 00:06:38.410 --rc geninfo_unexecuted_blocks=1 00:06:38.410 00:06:38.410 ' 00:06:38.410 12:53:09 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:38.410 12:53:09 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59292 00:06:38.410 12:53:09 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:38.410 12:53:09 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59292 00:06:38.410 12:53:09 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59292 ']' 00:06:38.410 12:53:09 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.410 12:53:09 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.410 12:53:09 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.410 12:53:09 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.410 12:53:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:38.410 [2024-11-29 12:53:09.919199] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:06:38.410 [2024-11-29 12:53:09.919552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59292 ] 00:06:38.669 [2024-11-29 12:53:10.060475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.669 [2024-11-29 12:53:10.119184] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.928 [2024-11-29 12:53:10.207584] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.497 12:53:10 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.497 12:53:10 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:39.497 12:53:10 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:39.756 { 00:06:39.756 "version": "SPDK v25.01-pre git sha1 89b293437", 00:06:39.756 "fields": { 00:06:39.756 "major": 25, 00:06:39.756 "minor": 1, 00:06:39.756 "patch": 0, 00:06:39.756 "suffix": "-pre", 00:06:39.756 "commit": "89b293437" 00:06:39.756 } 00:06:39.756 } 00:06:39.756 12:53:11 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:39.756 12:53:11 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:39.756 12:53:11 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:39.756 12:53:11 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:39.756 12:53:11 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:39.756 12:53:11 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:39.756 12:53:11 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.756 12:53:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:39.756 12:53:11 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:39.756 12:53:11 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.015 12:53:11 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:40.015 12:53:11 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:40.015 12:53:11 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:40.015 12:53:11 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:40.015 12:53:11 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:40.015 12:53:11 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:40.015 12:53:11 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.015 12:53:11 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:40.015 12:53:11 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.015 12:53:11 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:40.015 12:53:11 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.015 12:53:11 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:40.015 12:53:11 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:40.015 12:53:11 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:40.275 request: 00:06:40.275 { 00:06:40.275 "method": "env_dpdk_get_mem_stats", 00:06:40.275 "req_id": 1 00:06:40.275 } 00:06:40.275 Got JSON-RPC error response 00:06:40.275 response: 00:06:40.275 { 00:06:40.275 "code": -32601, 00:06:40.275 "message": "Method not found" 00:06:40.275 } 00:06:40.275 12:53:11 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:40.275 12:53:11 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:40.275 12:53:11 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:40.275 12:53:11 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:40.275 12:53:11 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59292 00:06:40.275 12:53:11 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59292 ']' 00:06:40.275 12:53:11 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59292 00:06:40.275 12:53:11 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:40.275 12:53:11 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.275 12:53:11 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59292 00:06:40.275 killing process with pid 59292 00:06:40.275 12:53:11 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.275 12:53:11 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.275 12:53:11 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59292' 00:06:40.275 12:53:11 app_cmdline -- common/autotest_common.sh@973 -- # kill 59292 00:06:40.275 12:53:11 app_cmdline -- common/autotest_common.sh@978 -- # wait 59292 00:06:40.844 ************************************ 00:06:40.844 END TEST app_cmdline 00:06:40.844 ************************************ 00:06:40.844 00:06:40.844 real 0m2.540s 00:06:40.844 user 0m3.110s 00:06:40.844 sys 0m0.604s 00:06:40.844 12:53:12 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.844 12:53:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:40.844 12:53:12 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:40.844 12:53:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.844 12:53:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.844 12:53:12 -- common/autotest_common.sh@10 -- # set +x 00:06:40.844 ************************************ 00:06:40.844 START TEST version 00:06:40.844 ************************************ 00:06:40.844 12:53:12 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:40.844 * Looking for test storage... 
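The app_cmdline test above also demonstrates the --rpcs-allowed allow-list: the target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two calls succeed while env_dpdk_get_mem_stats comes back as the -32601 "Method not found" error. A sketch of the same behaviour exercised by hand (paths as in this run; wait for /var/tmp/spdk.sock to appear before issuing RPCs):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
/home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version          # on the allow-list: returns the version JSON
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats    # not on the list: JSON-RPC -32601 "Method not found"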
00:06:40.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:40.844 12:53:12 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:40.844 12:53:12 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:40.844 12:53:12 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:41.103 12:53:12 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:41.103 12:53:12 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.103 12:53:12 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.103 12:53:12 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.103 12:53:12 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.103 12:53:12 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.103 12:53:12 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.103 12:53:12 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.103 12:53:12 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.103 12:53:12 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.103 12:53:12 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.103 12:53:12 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.103 12:53:12 version -- scripts/common.sh@344 -- # case "$op" in 00:06:41.103 12:53:12 version -- scripts/common.sh@345 -- # : 1 00:06:41.103 12:53:12 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.103 12:53:12 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:41.103 12:53:12 version -- scripts/common.sh@365 -- # decimal 1 00:06:41.103 12:53:12 version -- scripts/common.sh@353 -- # local d=1 00:06:41.103 12:53:12 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.103 12:53:12 version -- scripts/common.sh@355 -- # echo 1 00:06:41.103 12:53:12 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.103 12:53:12 version -- scripts/common.sh@366 -- # decimal 2 00:06:41.103 12:53:12 version -- scripts/common.sh@353 -- # local d=2 00:06:41.103 12:53:12 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.103 12:53:12 version -- scripts/common.sh@355 -- # echo 2 00:06:41.103 12:53:12 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.103 12:53:12 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.103 12:53:12 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.103 12:53:12 version -- scripts/common.sh@368 -- # return 0 00:06:41.103 12:53:12 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.103 12:53:12 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:41.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.103 --rc genhtml_branch_coverage=1 00:06:41.103 --rc genhtml_function_coverage=1 00:06:41.103 --rc genhtml_legend=1 00:06:41.103 --rc geninfo_all_blocks=1 00:06:41.103 --rc geninfo_unexecuted_blocks=1 00:06:41.103 00:06:41.103 ' 00:06:41.103 12:53:12 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:41.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.103 --rc genhtml_branch_coverage=1 00:06:41.103 --rc genhtml_function_coverage=1 00:06:41.103 --rc genhtml_legend=1 00:06:41.103 --rc geninfo_all_blocks=1 00:06:41.103 --rc geninfo_unexecuted_blocks=1 00:06:41.103 00:06:41.103 ' 00:06:41.103 12:53:12 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:41.103 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:41.103 --rc genhtml_branch_coverage=1 00:06:41.103 --rc genhtml_function_coverage=1 00:06:41.103 --rc genhtml_legend=1 00:06:41.103 --rc geninfo_all_blocks=1 00:06:41.103 --rc geninfo_unexecuted_blocks=1 00:06:41.103 00:06:41.103 ' 00:06:41.103 12:53:12 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:41.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.103 --rc genhtml_branch_coverage=1 00:06:41.103 --rc genhtml_function_coverage=1 00:06:41.103 --rc genhtml_legend=1 00:06:41.103 --rc geninfo_all_blocks=1 00:06:41.103 --rc geninfo_unexecuted_blocks=1 00:06:41.103 00:06:41.103 ' 00:06:41.103 12:53:12 version -- app/version.sh@17 -- # get_header_version major 00:06:41.103 12:53:12 version -- app/version.sh@14 -- # cut -f2 00:06:41.103 12:53:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:41.103 12:53:12 version -- app/version.sh@14 -- # tr -d '"' 00:06:41.103 12:53:12 version -- app/version.sh@17 -- # major=25 00:06:41.103 12:53:12 version -- app/version.sh@18 -- # get_header_version minor 00:06:41.103 12:53:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:41.103 12:53:12 version -- app/version.sh@14 -- # cut -f2 00:06:41.103 12:53:12 version -- app/version.sh@14 -- # tr -d '"' 00:06:41.103 12:53:12 version -- app/version.sh@18 -- # minor=1 00:06:41.103 12:53:12 version -- app/version.sh@19 -- # get_header_version patch 00:06:41.103 12:53:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:41.103 12:53:12 version -- app/version.sh@14 -- # cut -f2 00:06:41.103 12:53:12 version -- app/version.sh@14 -- # tr -d '"' 00:06:41.103 12:53:12 version -- app/version.sh@19 -- # patch=0 00:06:41.104 12:53:12 version -- app/version.sh@20 -- # get_header_version suffix 00:06:41.104 12:53:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:41.104 12:53:12 version -- app/version.sh@14 -- # cut -f2 00:06:41.104 12:53:12 version -- app/version.sh@14 -- # tr -d '"' 00:06:41.104 12:53:12 version -- app/version.sh@20 -- # suffix=-pre 00:06:41.104 12:53:12 version -- app/version.sh@22 -- # version=25.1 00:06:41.104 12:53:12 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:41.104 12:53:12 version -- app/version.sh@28 -- # version=25.1rc0 00:06:41.104 12:53:12 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:41.104 12:53:12 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:41.104 12:53:12 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:41.104 12:53:12 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:41.104 00:06:41.104 real 0m0.263s 00:06:41.104 user 0m0.152s 00:06:41.104 sys 0m0.148s 00:06:41.104 ************************************ 00:06:41.104 END TEST version 00:06:41.104 ************************************ 00:06:41.104 12:53:12 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.104 12:53:12 version -- common/autotest_common.sh@10 -- # set +x 00:06:41.104 12:53:12 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:41.104 12:53:12 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:41.104 12:53:12 -- spdk/autotest.sh@194 -- # uname -s 00:06:41.104 12:53:12 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:41.104 12:53:12 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:41.104 12:53:12 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:06:41.104 12:53:12 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:06:41.104 12:53:12 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:41.104 12:53:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.104 12:53:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.104 12:53:12 -- common/autotest_common.sh@10 -- # set +x 00:06:41.104 ************************************ 00:06:41.104 START TEST spdk_dd 00:06:41.104 ************************************ 00:06:41.104 12:53:12 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:41.363 * Looking for test storage... 00:06:41.363 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:41.363 12:53:12 spdk_dd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:41.363 12:53:12 spdk_dd -- common/autotest_common.sh@1693 -- # lcov --version 00:06:41.363 12:53:12 spdk_dd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:41.363 12:53:12 spdk_dd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:41.363 12:53:12 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.363 12:53:12 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.363 12:53:12 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.363 12:53:12 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.363 12:53:12 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.363 12:53:12 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.363 12:53:12 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.364 12:53:12 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.364 12:53:12 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.364 12:53:12 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.364 12:53:12 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.364 12:53:12 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:06:41.364 12:53:12 spdk_dd -- scripts/common.sh@345 -- # : 1 00:06:41.364 12:53:12 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.364 12:53:12 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:41.364 12:53:12 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:06:41.364 12:53:12 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:06:41.364 12:53:12 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.364 12:53:12 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:06:41.364 12:53:12 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.364 12:53:12 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:06:41.364 12:53:12 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:06:41.364 12:53:12 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.364 12:53:12 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:06:41.364 12:53:12 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.364 12:53:12 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.364 12:53:12 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.364 12:53:12 spdk_dd -- scripts/common.sh@368 -- # return 0 00:06:41.364 12:53:12 spdk_dd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.364 12:53:12 spdk_dd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:41.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.364 --rc genhtml_branch_coverage=1 00:06:41.364 --rc genhtml_function_coverage=1 00:06:41.364 --rc genhtml_legend=1 00:06:41.364 --rc geninfo_all_blocks=1 00:06:41.364 --rc geninfo_unexecuted_blocks=1 00:06:41.364 00:06:41.364 ' 00:06:41.364 12:53:12 spdk_dd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:41.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.364 --rc genhtml_branch_coverage=1 00:06:41.364 --rc genhtml_function_coverage=1 00:06:41.364 --rc genhtml_legend=1 00:06:41.364 --rc geninfo_all_blocks=1 00:06:41.364 --rc geninfo_unexecuted_blocks=1 00:06:41.364 00:06:41.364 ' 00:06:41.364 12:53:12 spdk_dd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:41.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.364 --rc genhtml_branch_coverage=1 00:06:41.364 --rc genhtml_function_coverage=1 00:06:41.364 --rc genhtml_legend=1 00:06:41.364 --rc geninfo_all_blocks=1 00:06:41.364 --rc geninfo_unexecuted_blocks=1 00:06:41.364 00:06:41.364 ' 00:06:41.364 12:53:12 spdk_dd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:41.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.364 --rc genhtml_branch_coverage=1 00:06:41.364 --rc genhtml_function_coverage=1 00:06:41.364 --rc genhtml_legend=1 00:06:41.364 --rc geninfo_all_blocks=1 00:06:41.364 --rc geninfo_unexecuted_blocks=1 00:06:41.364 00:06:41.364 ' 00:06:41.364 12:53:12 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:41.364 12:53:12 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:06:41.364 12:53:12 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:41.364 12:53:12 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:41.364 12:53:12 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:41.364 12:53:12 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.364 12:53:12 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.364 12:53:12 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.364 12:53:12 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:41.364 12:53:12 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.364 12:53:12 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:41.623 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:41.623 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:41.623 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:41.884 12:53:13 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:41.884 12:53:13 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@233 -- # local class 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@235 -- # local progif 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@236 -- # class=01 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:06:41.884 12:53:13 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:06:41.884 12:53:13 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:41.884 12:53:13 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@139 -- # local lib 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
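[Editor's note] nvme_in_userspace, traced just above, turns the lspci listing into the pair of NVMe controller addresses (0000:00:10.0 and 0000:00:11.0) that the dd tests will target: it keeps PCI devices with class 01 (mass storage), subclass 08 (NVM), prog-if 02 (NVMe) and then drops anything already claimed for another use. The filtering pipeline on its own looks roughly like this (function name illustrative; the allow/deny handling done by pci_can_use is omitted):

# Print the PCI addresses of NVMe controllers (class/subclass 0108, prog-if 02),
# mirroring the lspci pipeline in the trace. lspci -mm -n -D emits one
# machine-readable line per device with quoted numeric IDs.
list_nvme_bdfs() {
    lspci -mm -n -D |
        grep -i -- -p02 |
        awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' |
        tr -d '"'
}
list_nvme_bdfs    # -> 0000:00:10.0 and 0000:00:11.0 on this VM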
00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.11.0 == liburing.so.* ]] 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.12.0 == liburing.so.* ]] 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.11.0 == liburing.so.* ]] 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.12.0 == liburing.so.* ]] 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.884 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:41.885 * spdk_dd linked to liburing 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@146 -- # [[ -e 
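[Editor's note] The long run of [[ lib == liburing.so.* ]] comparisons above is check_liburing (dd/common.sh@137-144) walking every DT_NEEDED entry of the spdk_dd binary; once liburing.so.2 matches it prints '* spdk_dd linked to liburing'. Reduced to its core, the check is (function name illustrative):

# Return success when a binary's dynamic dependencies include liburing,
# scanning DT_NEEDED entries the same way the trace above does.
linked_to_liburing() {
    local bin=$1 lib
    while read -r _ lib _; do
        [[ $lib == liburing.so.* ]] && return 0
    done < <(objdump -p "$bin" | grep NEEDED)
    return 1
}
linked_to_liburing /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd &&
    printf '* spdk_dd linked to liburing\n'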
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:41.885 12:53:13 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:41.885 12:53:13 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:41.885 12:53:13 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:41.885 12:53:13 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:41.885 12:53:13 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:41.885 12:53:13 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:41.885 12:53:13 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:41.885 12:53:13 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:41.885 12:53:13 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:41.885 12:53:13 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:41.885 12:53:13 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:41.885 12:53:13 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:41.885 12:53:13 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:41.885 12:53:13 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:41.885 12:53:13 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:41.885 12:53:13 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:41.885 12:53:13 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:41.885 12:53:13 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:06:41.885 12:53:13 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:06:41.885 12:53:13 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:41.885 12:53:13 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:06:41.886 12:53:13 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:06:41.886 12:53:13 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:06:41.886 12:53:13 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:41.886 12:53:13 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:06:41.886 12:53:13 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:06:41.886 12:53:13 spdk_dd -- dd/common.sh@153 -- # return 0 00:06:41.886 12:53:13 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:41.886 12:53:13 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:41.886 12:53:13 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:41.886 12:53:13 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.886 12:53:13 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:41.886 ************************************ 00:06:41.886 START TEST spdk_dd_basic_rw 00:06:41.886 ************************************ 00:06:41.886 12:53:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:41.886 * Looking for test storage... 00:06:41.886 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:41.886 12:53:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:41.886 12:53:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lcov --version 00:06:41.886 12:53:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- 
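[Editor's note] dd/common.sh@146-153 above sources test/common/build_config.sh (the CONFIG_* dump) and ends with liburing_in_use=1, because spdk_dd links liburing and the build has CONFIG_URING=y; the dd/dd.sh@15 guard is therefore false and the basic_rw suite is launched. Condensed (the branch taken when the guard does fire is not visible in this log, so the message below is only a placeholder):

# Condensed from the trace: check_liburing left liburing_in_use=1, so the
# guard below is false and dd.sh goes on to run spdk_dd_basic_rw.
liburing_in_use=1
SPDK_TEST_URING=1
if (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )); then
    echo "uring tests requested but liburing is not usable" >&2    # placeholder; not shown in this log
fi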
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:42.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.146 --rc genhtml_branch_coverage=1 00:06:42.146 --rc genhtml_function_coverage=1 00:06:42.146 --rc genhtml_legend=1 00:06:42.146 --rc geninfo_all_blocks=1 00:06:42.146 --rc geninfo_unexecuted_blocks=1 00:06:42.146 00:06:42.146 ' 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:42.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.146 --rc genhtml_branch_coverage=1 00:06:42.146 --rc genhtml_function_coverage=1 00:06:42.146 --rc genhtml_legend=1 00:06:42.146 --rc geninfo_all_blocks=1 00:06:42.146 --rc geninfo_unexecuted_blocks=1 00:06:42.146 00:06:42.146 ' 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:42.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.146 --rc genhtml_branch_coverage=1 00:06:42.146 --rc genhtml_function_coverage=1 00:06:42.146 --rc genhtml_legend=1 00:06:42.146 --rc geninfo_all_blocks=1 00:06:42.146 --rc geninfo_unexecuted_blocks=1 00:06:42.146 00:06:42.146 ' 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:42.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.146 --rc genhtml_branch_coverage=1 00:06:42.146 --rc genhtml_function_coverage=1 00:06:42.146 --rc genhtml_legend=1 00:06:42.146 --rc geninfo_all_blocks=1 00:06:42.146 --rc geninfo_unexecuted_blocks=1 00:06:42.146 00:06:42.146 ' 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:42.146 12:53:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:42.147 12:53:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:42.147 12:53:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:42.147 12:53:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:42.147 12:53:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:42.147 12:53:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:42.147 12:53:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:42.147 12:53:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
00:06:42.147 12:53:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:42.147 12:53:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:42.147 12:53:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:42.147 12:53:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:42.408 12:53:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 3 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:42.408 12:53:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:42.409 12:53:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 3 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:42.409 12:53:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:42.409 12:53:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:42.409 12:53:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:42.409 12:53:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:42.409 12:53:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:42.409 12:53:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:42.409 12:53:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:42.409 12:53:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:42.409 12:53:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:42.409 12:53:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.409 12:53:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:42.409 ************************************ 00:06:42.409 START TEST dd_bs_lt_native_bs 00:06:42.409 ************************************ 00:06:42.409 12:53:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:42.409 12:53:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:06:42.409 12:53:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:42.409 12:53:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:42.409 12:53:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:42.409 12:53:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
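[Editor's note] Before dd_bs_lt_native_bs starts, get_native_nvme_bs (dd/common.sh@124-134, traced above) works out the namespace's native block size: it captures the spdk_nvme_identify output for 0000:00:10.0, regex-extracts the index of the current LBA format (#04), then that format's data size (4096), which becomes native_bs. A standalone sketch of the same extraction:

# Determine a controller's in-use LBA data size from spdk_nvme_identify output,
# following the two regex matches in the trace (error handling omitted).
get_native_nvme_bs() {
    local pci=$1 id lbaf
    local re_cur='Current LBA Format: *LBA Format #([0-9]+)'
    id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci")
    [[ $id =~ $re_cur ]] || return 1
    lbaf=${BASH_REMATCH[1]}
    local re_size="LBA Format #${lbaf}: Data Size: *([0-9]+)"
    [[ $id =~ $re_size ]] || return 1
    echo "${BASH_REMATCH[1]}"    # 4096 for the QEMU namespace above
}
get_native_nvme_bs 0000:00:10.0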
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:42.409 12:53:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:42.409 12:53:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:42.409 12:53:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:42.409 12:53:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:42.409 12:53:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:42.409 12:53:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:42.409 { 00:06:42.409 "subsystems": [ 00:06:42.409 { 00:06:42.409 "subsystem": "bdev", 00:06:42.409 "config": [ 00:06:42.409 { 00:06:42.409 "params": { 00:06:42.409 "trtype": "pcie", 00:06:42.409 "traddr": "0000:00:10.0", 00:06:42.409 "name": "Nvme0" 00:06:42.409 }, 00:06:42.409 "method": "bdev_nvme_attach_controller" 00:06:42.409 }, 00:06:42.409 { 00:06:42.409 "method": "bdev_wait_for_examine" 00:06:42.409 } 00:06:42.409 ] 00:06:42.409 } 00:06:42.409 ] 00:06:42.409 } 00:06:42.409 [2024-11-29 12:53:13.735681] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:06:42.409 [2024-11-29 12:53:13.735925] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59649 ] 00:06:42.409 [2024-11-29 12:53:13.888098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.680 [2024-11-29 12:53:13.977781] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.680 [2024-11-29 12:53:14.058865] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:42.680 [2024-11-29 12:53:14.184684] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:42.680 [2024-11-29 12:53:14.184772] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:42.939 [2024-11-29 12:53:14.365767] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:43.198 12:53:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:06:43.198 12:53:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:43.198 12:53:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:06:43.198 12:53:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:06:43.198 12:53:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:06:43.198 12:53:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:43.198 00:06:43.198 real 0m0.775s 00:06:43.198 user 0m0.522s 00:06:43.198 sys 0m0.206s 00:06:43.198 12:53:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.198 12:53:14 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:43.198 ************************************ 00:06:43.198 END TEST dd_bs_lt_native_bs 00:06:43.198 ************************************ 00:06:43.198 12:53:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:43.198 12:53:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:43.198 12:53:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.198 12:53:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:43.198 ************************************ 00:06:43.198 START TEST dd_rw 00:06:43.198 ************************************ 00:06:43.198 12:53:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:06:43.198 12:53:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:43.198 12:53:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:43.198 12:53:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:43.198 12:53:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:43.198 12:53:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:43.198 12:53:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:43.198 12:53:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:43.198 12:53:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:43.198 12:53:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:43.198 12:53:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:43.198 12:53:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:43.198 12:53:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:43.198 12:53:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:43.199 12:53:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:43.199 12:53:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:43.199 12:53:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:43.199 12:53:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:43.199 12:53:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:43.765 12:53:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:43.765 12:53:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:43.765 12:53:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:43.765 12:53:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:43.765 [2024-11-29 12:53:15.183824] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
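The xtrace above closes dd_bs_lt_native_bs, which expects spdk_dd to reject --bs=2048 because it is smaller than the 4096-byte native block size reported by the controller, and opens dd_rw, which sweeps block sizes derived from that native size at queue depths 1 and 64 over a 61440-byte test file. The following sketch reproduces the same derivation in a few lines of shell; the standalone layout and the echo at the end are illustrative additions, not part of test/dd/basic_rw.sh itself.

  # Build the dd_rw matrix from the 4096-byte native block size, mirroring the
  # qds/bss/count/size values visible in the trace above.
  native_bs=4096                      # from "Current LBA Format: LBA Format #04" (data size 4096)
  qds=(1 64)                          # queue depths tried for every block size
  bss=()
  for bs in {0..2}; do
    bss+=($((native_bs << bs)))       # 4096, 8192, 16384
  done
  count=15
  size=$((count * native_bs))         # 61440 bytes produced by gen_bytes
  echo "block sizes: ${bss[*]}  queue depths: ${qds[*]}  payload: $size bytes"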
00:06:43.765 [2024-11-29 12:53:15.183970] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59685 ] 00:06:43.765 { 00:06:43.765 "subsystems": [ 00:06:43.765 { 00:06:43.765 "subsystem": "bdev", 00:06:43.765 "config": [ 00:06:43.765 { 00:06:43.765 "params": { 00:06:43.765 "trtype": "pcie", 00:06:43.765 "traddr": "0000:00:10.0", 00:06:43.765 "name": "Nvme0" 00:06:43.765 }, 00:06:43.765 "method": "bdev_nvme_attach_controller" 00:06:43.765 }, 00:06:43.765 { 00:06:43.765 "method": "bdev_wait_for_examine" 00:06:43.765 } 00:06:43.765 ] 00:06:43.765 } 00:06:43.765 ] 00:06:43.765 } 00:06:44.023 [2024-11-29 12:53:15.334835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.023 [2024-11-29 12:53:15.409126] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.023 [2024-11-29 12:53:15.467928] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.281  [2024-11-29T12:53:15.796Z] Copying: 60/60 [kB] (average 19 MBps) 00:06:44.281 00:06:44.281 12:53:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:44.281 12:53:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:44.281 12:53:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:44.281 12:53:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:44.541 [2024-11-29 12:53:15.824247] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
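The JSON block repeated throughout this trace is the bdev configuration that gen_conf hands to spdk_dd on an inherited file descriptor (--json /dev/fd/62): it attaches the PCIe controller at 0000:00:10.0 as Nvme0 and waits for bdev examination, which is what makes the Nvme0n1 namespace usable as the --ob/--ib target. A minimal way to pass an equivalent config is sketched below; the inline conf variable and the use of process substitution are assumptions for illustration, while the test itself builds the JSON through its gen_conf helper.

  # Hand spdk_dd an inline bdev config equivalent to the one logged above.
  conf='{
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
            "method": "bdev_nvme_attach_controller" },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }'
  # spdk_dd here stands for build/bin/spdk_dd from the SPDK checkout.
  spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json <(printf '%s' "$conf")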
00:06:44.541 [2024-11-29 12:53:15.824331] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59699 ] 00:06:44.541 { 00:06:44.541 "subsystems": [ 00:06:44.541 { 00:06:44.541 "subsystem": "bdev", 00:06:44.541 "config": [ 00:06:44.541 { 00:06:44.541 "params": { 00:06:44.541 "trtype": "pcie", 00:06:44.541 "traddr": "0000:00:10.0", 00:06:44.541 "name": "Nvme0" 00:06:44.541 }, 00:06:44.541 "method": "bdev_nvme_attach_controller" 00:06:44.541 }, 00:06:44.541 { 00:06:44.541 "method": "bdev_wait_for_examine" 00:06:44.541 } 00:06:44.541 ] 00:06:44.541 } 00:06:44.541 ] 00:06:44.541 } 00:06:44.541 [2024-11-29 12:53:15.963062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.541 [2024-11-29 12:53:16.024550] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.799 [2024-11-29 12:53:16.085063] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.799  [2024-11-29T12:53:16.572Z] Copying: 60/60 [kB] (average 19 MBps) 00:06:45.057 00:06:45.058 12:53:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:45.058 12:53:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:45.058 12:53:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:45.058 12:53:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:45.058 12:53:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:45.058 12:53:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:45.058 12:53:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:45.058 12:53:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:45.058 12:53:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:45.058 12:53:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:45.058 12:53:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:45.058 { 00:06:45.058 "subsystems": [ 00:06:45.058 { 00:06:45.058 "subsystem": "bdev", 00:06:45.058 "config": [ 00:06:45.058 { 00:06:45.058 "params": { 00:06:45.058 "trtype": "pcie", 00:06:45.058 "traddr": "0000:00:10.0", 00:06:45.058 "name": "Nvme0" 00:06:45.058 }, 00:06:45.058 "method": "bdev_nvme_attach_controller" 00:06:45.058 }, 00:06:45.058 { 00:06:45.058 "method": "bdev_wait_for_examine" 00:06:45.058 } 00:06:45.058 ] 00:06:45.058 } 00:06:45.058 ] 00:06:45.058 } 00:06:45.058 [2024-11-29 12:53:16.474346] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:06:45.058 [2024-11-29 12:53:16.474449] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59720 ] 00:06:45.317 [2024-11-29 12:53:16.620268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.317 [2024-11-29 12:53:16.678434] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.317 [2024-11-29 12:53:16.731945] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.576  [2024-11-29T12:53:17.091Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:45.576 00:06:45.576 12:53:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:45.576 12:53:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:45.576 12:53:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:45.576 12:53:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:45.576 12:53:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:45.577 12:53:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:45.577 12:53:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:46.143 12:53:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:46.143 12:53:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:46.143 12:53:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:46.143 12:53:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:46.143 [2024-11-29 12:53:17.639934] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
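Each (block size, queue depth) pair in dd_rw follows the same four-step pass visible above: write the generated dump file into Nvme0n1, read the same number of blocks back into a second file, diff the two, then blank the start of the namespace with a 1 MiB zero write before the next pass. A condensed sketch follows; the shortened file names and calling gen_conf through process substitution are illustrative assumptions (the trace uses the full test/dd/dd.dump0 and dd.dump1 paths and /dev/fd/62).

  # One dd_rw pass as it appears in the trace (bs=4096, qd=1, 15 blocks).
  bs=4096 qd=1 count=15
  spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=$bs --qd=$qd --json <(gen_conf)                 # write
  spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs=$bs --qd=$qd --count=$count --json <(gen_conf)  # read back
  diff -q dd.dump0 dd.dump1                                                               # verify round trip
  spdk_dd --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json <(gen_conf)           # clear first MiB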
00:06:46.143 { 00:06:46.143 "subsystems": [ 00:06:46.143 { 00:06:46.143 "subsystem": "bdev", 00:06:46.143 "config": [ 00:06:46.143 { 00:06:46.143 "params": { 00:06:46.143 "trtype": "pcie", 00:06:46.143 "traddr": "0000:00:10.0", 00:06:46.143 "name": "Nvme0" 00:06:46.143 }, 00:06:46.143 "method": "bdev_nvme_attach_controller" 00:06:46.143 }, 00:06:46.143 { 00:06:46.143 "method": "bdev_wait_for_examine" 00:06:46.143 } 00:06:46.143 ] 00:06:46.143 } 00:06:46.143 ] 00:06:46.143 } 00:06:46.143 [2024-11-29 12:53:17.640427] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59739 ] 00:06:46.402 [2024-11-29 12:53:17.787270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.402 [2024-11-29 12:53:17.846796] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.402 [2024-11-29 12:53:17.902144] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:46.659  [2024-11-29T12:53:18.432Z] Copying: 60/60 [kB] (average 58 MBps) 00:06:46.917 00:06:46.917 12:53:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:46.917 12:53:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:46.917 12:53:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:46.917 12:53:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:46.917 { 00:06:46.917 "subsystems": [ 00:06:46.917 { 00:06:46.917 "subsystem": "bdev", 00:06:46.917 "config": [ 00:06:46.917 { 00:06:46.918 "params": { 00:06:46.918 "trtype": "pcie", 00:06:46.918 "traddr": "0000:00:10.0", 00:06:46.918 "name": "Nvme0" 00:06:46.918 }, 00:06:46.918 "method": "bdev_nvme_attach_controller" 00:06:46.918 }, 00:06:46.918 { 00:06:46.918 "method": "bdev_wait_for_examine" 00:06:46.918 } 00:06:46.918 ] 00:06:46.918 } 00:06:46.918 ] 00:06:46.918 } 00:06:46.918 [2024-11-29 12:53:18.271806] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:06:46.918 [2024-11-29 12:53:18.271962] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59751 ] 00:06:46.918 [2024-11-29 12:53:18.421180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.176 [2024-11-29 12:53:18.468850] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.176 [2024-11-29 12:53:18.521408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.176  [2024-11-29T12:53:18.950Z] Copying: 60/60 [kB] (average 29 MBps) 00:06:47.435 00:06:47.435 12:53:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:47.435 12:53:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:47.435 12:53:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:47.435 12:53:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:47.435 12:53:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:47.435 12:53:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:47.435 12:53:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:47.435 12:53:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:47.435 12:53:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:47.435 12:53:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:47.435 12:53:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:47.435 { 00:06:47.435 "subsystems": [ 00:06:47.435 { 00:06:47.435 "subsystem": "bdev", 00:06:47.435 "config": [ 00:06:47.435 { 00:06:47.435 "params": { 00:06:47.435 "trtype": "pcie", 00:06:47.436 "traddr": "0000:00:10.0", 00:06:47.436 "name": "Nvme0" 00:06:47.436 }, 00:06:47.436 "method": "bdev_nvme_attach_controller" 00:06:47.436 }, 00:06:47.436 { 00:06:47.436 "method": "bdev_wait_for_examine" 00:06:47.436 } 00:06:47.436 ] 00:06:47.436 } 00:06:47.436 ] 00:06:47.436 } 00:06:47.436 [2024-11-29 12:53:18.900672] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:06:47.436 [2024-11-29 12:53:18.901058] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59768 ] 00:06:47.695 [2024-11-29 12:53:19.047204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.695 [2024-11-29 12:53:19.092862] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.695 [2024-11-29 12:53:19.146136] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.954  [2024-11-29T12:53:19.469Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:47.954 00:06:47.954 12:53:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:47.954 12:53:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:47.954 12:53:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:47.954 12:53:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:47.954 12:53:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:47.954 12:53:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:47.954 12:53:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:47.954 12:53:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:48.522 12:53:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:48.522 12:53:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:48.523 12:53:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:48.523 12:53:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:48.523 { 00:06:48.523 "subsystems": [ 00:06:48.523 { 00:06:48.523 "subsystem": "bdev", 00:06:48.523 "config": [ 00:06:48.523 { 00:06:48.523 "params": { 00:06:48.523 "trtype": "pcie", 00:06:48.523 "traddr": "0000:00:10.0", 00:06:48.523 "name": "Nvme0" 00:06:48.523 }, 00:06:48.523 "method": "bdev_nvme_attach_controller" 00:06:48.523 }, 00:06:48.523 { 00:06:48.523 "method": "bdev_wait_for_examine" 00:06:48.523 } 00:06:48.523 ] 00:06:48.523 } 00:06:48.523 ] 00:06:48.523 } 00:06:48.782 [2024-11-29 12:53:20.041942] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:06:48.782 [2024-11-29 12:53:20.042082] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59787 ] 00:06:48.782 [2024-11-29 12:53:20.194440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.782 [2024-11-29 12:53:20.246238] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.041 [2024-11-29 12:53:20.306120] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:49.041  [2024-11-29T12:53:20.815Z] Copying: 56/56 [kB] (average 27 MBps) 00:06:49.300 00:06:49.300 12:53:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:49.300 12:53:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:49.300 12:53:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:49.300 12:53:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:49.300 { 00:06:49.300 "subsystems": [ 00:06:49.300 { 00:06:49.300 "subsystem": "bdev", 00:06:49.300 "config": [ 00:06:49.300 { 00:06:49.300 "params": { 00:06:49.300 "trtype": "pcie", 00:06:49.300 "traddr": "0000:00:10.0", 00:06:49.300 "name": "Nvme0" 00:06:49.300 }, 00:06:49.300 "method": "bdev_nvme_attach_controller" 00:06:49.300 }, 00:06:49.300 { 00:06:49.300 "method": "bdev_wait_for_examine" 00:06:49.300 } 00:06:49.300 ] 00:06:49.300 } 00:06:49.300 ] 00:06:49.300 } 00:06:49.300 [2024-11-29 12:53:20.675164] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:06:49.300 [2024-11-29 12:53:20.675260] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59806 ] 00:06:49.570 [2024-11-29 12:53:20.821178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.570 [2024-11-29 12:53:20.868807] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.570 [2024-11-29 12:53:20.923410] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:49.570  [2024-11-29T12:53:21.346Z] Copying: 56/56 [kB] (average 27 MBps) 00:06:49.831 00:06:49.831 12:53:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:49.831 12:53:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:49.831 12:53:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:49.831 12:53:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:49.831 12:53:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:49.831 12:53:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:49.831 12:53:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:49.832 12:53:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:49.832 12:53:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:49.832 12:53:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:49.832 12:53:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:49.832 { 00:06:49.832 "subsystems": [ 00:06:49.832 { 00:06:49.832 "subsystem": "bdev", 00:06:49.832 "config": [ 00:06:49.832 { 00:06:49.832 "params": { 00:06:49.832 "trtype": "pcie", 00:06:49.832 "traddr": "0000:00:10.0", 00:06:49.832 "name": "Nvme0" 00:06:49.832 }, 00:06:49.832 "method": "bdev_nvme_attach_controller" 00:06:49.832 }, 00:06:49.832 { 00:06:49.832 "method": "bdev_wait_for_examine" 00:06:49.832 } 00:06:49.832 ] 00:06:49.832 } 00:06:49.832 ] 00:06:49.832 } 00:06:49.832 [2024-11-29 12:53:21.299122] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:06:49.832 [2024-11-29 12:53:21.299219] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59816 ] 00:06:50.091 [2024-11-29 12:53:21.444427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.091 [2024-11-29 12:53:21.490698] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.091 [2024-11-29 12:53:21.547407] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:50.351  [2024-11-29T12:53:21.866Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:50.351 00:06:50.351 12:53:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:50.351 12:53:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:50.351 12:53:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:50.351 12:53:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:50.351 12:53:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:50.351 12:53:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:50.351 12:53:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:50.919 12:53:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:50.919 12:53:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:50.919 12:53:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:50.919 12:53:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:50.919 { 00:06:50.919 "subsystems": [ 00:06:50.919 { 00:06:50.919 "subsystem": "bdev", 00:06:50.919 "config": [ 00:06:50.919 { 00:06:50.919 "params": { 00:06:50.919 "trtype": "pcie", 00:06:50.919 "traddr": "0000:00:10.0", 00:06:50.919 "name": "Nvme0" 00:06:50.919 }, 00:06:50.919 "method": "bdev_nvme_attach_controller" 00:06:50.919 }, 00:06:50.919 { 00:06:50.919 "method": "bdev_wait_for_examine" 00:06:50.919 } 00:06:50.919 ] 00:06:50.919 } 00:06:50.919 ] 00:06:50.919 } 00:06:50.919 [2024-11-29 12:53:22.411239] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:06:50.920 [2024-11-29 12:53:22.411375] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59835 ] 00:06:51.179 [2024-11-29 12:53:22.575111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.179 [2024-11-29 12:53:22.649114] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.438 [2024-11-29 12:53:22.709179] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.438  [2024-11-29T12:53:23.213Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:51.698 00:06:51.698 12:53:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:51.698 12:53:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:51.698 12:53:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:51.698 12:53:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:51.698 { 00:06:51.698 "subsystems": [ 00:06:51.698 { 00:06:51.698 "subsystem": "bdev", 00:06:51.698 "config": [ 00:06:51.698 { 00:06:51.698 "params": { 00:06:51.698 "trtype": "pcie", 00:06:51.698 "traddr": "0000:00:10.0", 00:06:51.698 "name": "Nvme0" 00:06:51.698 }, 00:06:51.698 "method": "bdev_nvme_attach_controller" 00:06:51.698 }, 00:06:51.698 { 00:06:51.698 "method": "bdev_wait_for_examine" 00:06:51.698 } 00:06:51.698 ] 00:06:51.698 } 00:06:51.698 ] 00:06:51.698 } 00:06:51.698 [2024-11-29 12:53:23.075615] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:06:51.698 [2024-11-29 12:53:23.075919] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59854 ] 00:06:51.956 [2024-11-29 12:53:23.223561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.956 [2024-11-29 12:53:23.281411] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.956 [2024-11-29 12:53:23.336335] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.957  [2024-11-29T12:53:23.730Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:52.215 00:06:52.215 12:53:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:52.215 12:53:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:52.215 12:53:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:52.215 12:53:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:52.215 12:53:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:52.215 12:53:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:52.215 12:53:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:52.215 12:53:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:52.215 12:53:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:52.215 12:53:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:52.215 12:53:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:52.215 { 00:06:52.215 "subsystems": [ 00:06:52.215 { 00:06:52.215 "subsystem": "bdev", 00:06:52.215 "config": [ 00:06:52.215 { 00:06:52.215 "params": { 00:06:52.215 "trtype": "pcie", 00:06:52.215 "traddr": "0000:00:10.0", 00:06:52.215 "name": "Nvme0" 00:06:52.215 }, 00:06:52.215 "method": "bdev_nvme_attach_controller" 00:06:52.215 }, 00:06:52.215 { 00:06:52.215 "method": "bdev_wait_for_examine" 00:06:52.215 } 00:06:52.215 ] 00:06:52.215 } 00:06:52.215 ] 00:06:52.215 } 00:06:52.215 [2024-11-29 12:53:23.722066] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:06:52.215 [2024-11-29 12:53:23.722165] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59875 ] 00:06:52.475 [2024-11-29 12:53:23.869800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.475 [2024-11-29 12:53:23.927421] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.475 [2024-11-29 12:53:23.986770] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:52.738  [2024-11-29T12:53:24.514Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:52.999 00:06:52.999 12:53:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:52.999 12:53:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:52.999 12:53:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:52.999 12:53:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:52.999 12:53:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:52.999 12:53:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:52.999 12:53:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:52.999 12:53:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:53.567 12:53:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:53.567 12:53:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:53.567 12:53:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:53.567 12:53:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:53.567 { 00:06:53.567 "subsystems": [ 00:06:53.567 { 00:06:53.567 "subsystem": "bdev", 00:06:53.567 "config": [ 00:06:53.567 { 00:06:53.567 "params": { 00:06:53.567 "trtype": "pcie", 00:06:53.567 "traddr": "0000:00:10.0", 00:06:53.567 "name": "Nvme0" 00:06:53.567 }, 00:06:53.567 "method": "bdev_nvme_attach_controller" 00:06:53.567 }, 00:06:53.567 { 00:06:53.567 "method": "bdev_wait_for_examine" 00:06:53.567 } 00:06:53.567 ] 00:06:53.567 } 00:06:53.567 ] 00:06:53.567 } 00:06:53.567 [2024-11-29 12:53:24.897316] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
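The per-block-size counts logged so far (count=15 at bs=4096, count=7 at bs=8192, count=3 at bs=16384, with sizes 61440, 57344 and 49152) are consistent with halving the base 15-block count each time the block size doubles, so that only whole blocks of the 61440-byte dump file are transferred. That reading is inferred from the numbers rather than quoted from basic_rw.sh, and the loop below only reproduces the arithmetic.

  # Reproduce the bs/count/size progression seen in the trace.
  base_count=15
  for shift in 0 1 2; do
    bs=$((4096 << shift))
    count=$((base_count >> shift))                      # 15, 7, 3
    echo "bs=$bs count=$count size=$((bs * count))"     # 61440, 57344, 49152
  done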
00:06:53.567 [2024-11-29 12:53:24.897614] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59894 ] 00:06:53.567 [2024-11-29 12:53:25.045939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.825 [2024-11-29 12:53:25.113160] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.825 [2024-11-29 12:53:25.170097] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:53.825  [2024-11-29T12:53:25.599Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:54.084 00:06:54.084 12:53:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:54.084 12:53:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:54.084 12:53:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:54.084 12:53:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:54.084 [2024-11-29 12:53:25.534854] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:06:54.084 [2024-11-29 12:53:25.534964] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59904 ] 00:06:54.084 { 00:06:54.084 "subsystems": [ 00:06:54.084 { 00:06:54.084 "subsystem": "bdev", 00:06:54.084 "config": [ 00:06:54.084 { 00:06:54.084 "params": { 00:06:54.084 "trtype": "pcie", 00:06:54.084 "traddr": "0000:00:10.0", 00:06:54.084 "name": "Nvme0" 00:06:54.084 }, 00:06:54.084 "method": "bdev_nvme_attach_controller" 00:06:54.084 }, 00:06:54.084 { 00:06:54.084 "method": "bdev_wait_for_examine" 00:06:54.084 } 00:06:54.084 ] 00:06:54.084 } 00:06:54.084 ] 00:06:54.084 } 00:06:54.343 [2024-11-29 12:53:25.680400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.343 [2024-11-29 12:53:25.742286] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.343 [2024-11-29 12:53:25.797369] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.601  [2024-11-29T12:53:26.116Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:54.601 00:06:54.601 12:53:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:54.601 12:53:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:54.601 12:53:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:54.601 12:53:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:54.601 12:53:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:54.601 12:53:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:54.861 12:53:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:54.861 12:53:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
00:06:54.861 12:53:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:54.861 12:53:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:54.861 12:53:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:54.861 [2024-11-29 12:53:26.174778] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:06:54.861 [2024-11-29 12:53:26.175300] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59925 ] 00:06:54.861 { 00:06:54.861 "subsystems": [ 00:06:54.861 { 00:06:54.861 "subsystem": "bdev", 00:06:54.861 "config": [ 00:06:54.861 { 00:06:54.861 "params": { 00:06:54.861 "trtype": "pcie", 00:06:54.861 "traddr": "0000:00:10.0", 00:06:54.861 "name": "Nvme0" 00:06:54.861 }, 00:06:54.861 "method": "bdev_nvme_attach_controller" 00:06:54.861 }, 00:06:54.861 { 00:06:54.861 "method": "bdev_wait_for_examine" 00:06:54.861 } 00:06:54.861 ] 00:06:54.861 } 00:06:54.861 ] 00:06:54.861 } 00:06:54.861 [2024-11-29 12:53:26.316780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.120 [2024-11-29 12:53:26.376279] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.120 [2024-11-29 12:53:26.434345] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.120  [2024-11-29T12:53:26.894Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:55.379 00:06:55.379 12:53:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:55.379 12:53:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:55.379 12:53:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:55.379 12:53:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:55.379 12:53:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:55.379 12:53:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:55.379 12:53:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:55.947 12:53:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:55.947 12:53:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:55.947 12:53:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:55.947 12:53:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:55.947 { 00:06:55.947 "subsystems": [ 00:06:55.947 { 00:06:55.947 "subsystem": "bdev", 00:06:55.947 "config": [ 00:06:55.947 { 00:06:55.947 "params": { 00:06:55.947 "trtype": "pcie", 00:06:55.947 "traddr": "0000:00:10.0", 00:06:55.947 "name": "Nvme0" 00:06:55.947 }, 00:06:55.947 "method": "bdev_nvme_attach_controller" 00:06:55.947 }, 00:06:55.947 { 00:06:55.947 "method": "bdev_wait_for_examine" 00:06:55.947 } 00:06:55.947 ] 00:06:55.947 } 00:06:55.947 ] 00:06:55.947 } 00:06:55.947 [2024-11-29 12:53:27.280813] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:06:55.947 [2024-11-29 12:53:27.281149] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59944 ] 00:06:55.947 [2024-11-29 12:53:27.423986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.206 [2024-11-29 12:53:27.490184] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.206 [2024-11-29 12:53:27.545085] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.206  [2024-11-29T12:53:28.007Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:56.492 00:06:56.492 12:53:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:56.492 12:53:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:56.492 12:53:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:56.492 12:53:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:56.492 [2024-11-29 12:53:27.909810] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:06:56.492 [2024-11-29 12:53:27.909913] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59963 ] 00:06:56.492 { 00:06:56.492 "subsystems": [ 00:06:56.492 { 00:06:56.492 "subsystem": "bdev", 00:06:56.492 "config": [ 00:06:56.492 { 00:06:56.492 "params": { 00:06:56.492 "trtype": "pcie", 00:06:56.492 "traddr": "0000:00:10.0", 00:06:56.492 "name": "Nvme0" 00:06:56.492 }, 00:06:56.492 "method": "bdev_nvme_attach_controller" 00:06:56.492 }, 00:06:56.492 { 00:06:56.492 "method": "bdev_wait_for_examine" 00:06:56.492 } 00:06:56.492 ] 00:06:56.492 } 00:06:56.492 ] 00:06:56.492 } 00:06:56.764 [2024-11-29 12:53:28.046732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.764 [2024-11-29 12:53:28.091473] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.764 [2024-11-29 12:53:28.147390] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.764  [2024-11-29T12:53:28.539Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:57.024 00:06:57.024 12:53:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:57.024 12:53:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:57.024 12:53:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:57.024 12:53:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:57.024 12:53:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:57.024 12:53:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:57.024 12:53:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:57.024 12:53:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
00:06:57.024 12:53:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:57.024 12:53:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:57.024 12:53:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:57.024 { 00:06:57.024 "subsystems": [ 00:06:57.024 { 00:06:57.024 "subsystem": "bdev", 00:06:57.024 "config": [ 00:06:57.024 { 00:06:57.024 "params": { 00:06:57.024 "trtype": "pcie", 00:06:57.024 "traddr": "0000:00:10.0", 00:06:57.024 "name": "Nvme0" 00:06:57.024 }, 00:06:57.024 "method": "bdev_nvme_attach_controller" 00:06:57.024 }, 00:06:57.024 { 00:06:57.024 "method": "bdev_wait_for_examine" 00:06:57.024 } 00:06:57.024 ] 00:06:57.024 } 00:06:57.024 ] 00:06:57.024 } 00:06:57.024 [2024-11-29 12:53:28.523489] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:06:57.024 [2024-11-29 12:53:28.523582] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59973 ] 00:06:57.284 [2024-11-29 12:53:28.664645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.284 [2024-11-29 12:53:28.718515] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.284 [2024-11-29 12:53:28.774572] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:57.543  [2024-11-29T12:53:29.317Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:57.802 00:06:57.802 00:06:57.802 real 0m14.580s 00:06:57.802 user 0m10.507s 00:06:57.802 sys 0m5.720s 00:06:57.802 12:53:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.802 12:53:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:57.802 ************************************ 00:06:57.802 END TEST dd_rw 00:06:57.802 ************************************ 00:06:57.802 12:53:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:57.802 12:53:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.802 12:53:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.802 12:53:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:57.802 ************************************ 00:06:57.802 START TEST dd_rw_offset 00:06:57.802 ************************************ 00:06:57.802 12:53:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:06:57.802 12:53:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:57.802 12:53:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:57.802 12:53:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:57.802 12:53:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:57.802 12:53:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:57.803 12:53:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=s75p6nmy7duf8kmdsnk5g4xhvhxcro704vkn0wp9wtz0lubjvibbmzgo9u13buhoktsmdnh0kz2f7bsknw18jju0s35i31fbauk6b85q6dl98zredxpesvv3b4t4oduf77gq6eg1r5aik1gm0qyddjnsv8tmsblqcjw8273fzybwj22ck05qf9g4nnrtu2wwb0eb6n4im0d48hmrd467lfryeuyqs4z9r54g7ewvvtyqt1r3yegjll0fvdsbtpnuk3vcrxnzctgo8esnmh4mbopgpnbjd0kfjjyn7s2euue85yjuu9q29x4slit7pop2c2g7v445zemf8kuf8ysidz9xeyrcjpfvuhwkzru7zsmhxcg6hsfvwn76jp5sm4hplfp8wduqk8f051snwuoz8pbolmjubrewx7othy01yaxcx2t0ihs2wjzzp1ack695fmcaox6l7e7wryz02rguly5sp0f7171ijdfv996oo42g3gzy47j4dabwimvs0px5p4323gfphfc7uj13sjunkl34u8pakowmq5tfklgjb0okgtw48g0wlyig4bhorbde04p28xmsljg9hgfcsdysqfkk19eh9ufb2lnazqfs8klf2wpfkur2qjvqk3g1oo2v425dlbm2nraajgr0s23geqcr4g7or7wgoe2xo8m1vk40jpbwhh77u85s9xjv56qwom7a7trt31f1m3o24pkc62mg54ftnc6bjgl1pyf8bi9nhiahqxrt83oiyvsoddkq5upc2y0doabgzkux41p6xssziwrh20rdyn03ffm4vy1uuumnng9zmguypx9x0g2pvlbbb2u646x0ntsunq3oxt5rin0orup410au4q8kus6f4qnbq945igzexj219sqtkzyd43cwxtgqbva71drcvjcm8747z3bpy6z2t24jykrekpkctm0n0n84z9qp12wchnuafzaky0jh00y3zej3n8v6losr4chvpsrpv2y1d2g9huii7n916qxzoxo767becddf5d96q40izlzoq8r15iqigzruk7o2du0vhe4z95ouucksooyb42srq7ojbkmpg8kjxzodzfa775thozt867kamjrm1yjblbfjgjjhun04v47qt0aybc7nfmrxw13fuqm6c6me6505ubtqy6cm93erpvc0565li6fe0d0dhgl88xb719zq4i7ttvpfyhgkz7xeneuannkqipn72llip6la7lc0v98lpgcpeph8x79ar1w4e858gs8ajwpiohmjen67r08dhqrmswouresd4lls7d9mjpqr83usikbybtszd80pwm5ktpaa3k8rgq2ep4v4zbgiur6rwahfdf732kir75qg32o0pekz2e2r9p48qnbcwf4hywwwwbd1xhsmy3ckx2qmy3pm96ixvvtn1ccgftwlq5qf9e5e5lunmcxx0y1deg1w2yiohii0livewx5iaa3fpstmmzab24g2ers3t6gijw7p8yfmdmx3kkb0vndv3oqbu7t5srjsl4dca1afim5f8duuxgi3rpqblyhfy7b6mu7pzv1gec7vxyyg43vrdwtu7p3xn6k8nxu60ukuvdhtbljq3icn8cgrit2kllj8nbylaz6y4d1iyqoe3qht95bfz5mh4j8rt0bvpuio98ehzcw0aei45q2xt9lb0230i1avzhhka7g80iixr42rwqqhcft6k5rsx6iwd0nyfv4nvrpfwrmwxtzoyvn36nog2qbsd3xq8k70ywxw37x5h80sgsvpvdxzx62p94tht9291qutss0pwrf9gzrc9wps2ol89cb8wvnm9dtb7i0porak9ru70x3qqo3dkgz7f30t04sjv1n4r33j3cy2vcuiw76ndsd8v5lqlzmf6hiai7jjwvvx8drga25jvb4jk96iv22i2pq0adb55fc3iswi0jg8x0cv4iclw8odkf8kd3g6shf2720oo4qc9par6x9bz31omf97zmfumh5s3q5yeajmryboygqd4csge1euzvb8t34p6k3akwxcdfgaojapqanpx6w7y0el9f6vy3b6rnzljk317d62qyhapynaoavilo8j9yjtwva9pwingc6jvxqs3b3kuq6a0smtwyflexlivqf0wd0woowk0ozwlcu06cuyablfrj3jwcfxrhrr541mxk6k5akc8kof4oeughyooy6r6iaa8yl73t0gf3veqaw97g3gvehqqcg4u5xiv5vc8y7tjaeu5skkpra7lf7fx0su2ryrm9zqjl7ro83ld5e70ifrpnluym6e4eev8hkia24ysxbve8hkaz1d7clvzwloja2sxp37s2eswfsg9y3k6v3585fsiebjrj3uu0mexk9l515mzz29lp9syy1pqystjns7dp7gqdlla7jxp367rk21e9zhc7otjbum6s04080oq7xk8567iaagijswc79efknzv35x01ydvsebftuzynn96zoo18219blthcxh1f2fz2zoj19ovi2ydzyjgb0uhx4qdpxanih9n4dfgxypav5xduagxpqzqs3teaqgugcxd1arhnm6qj8syl67f5te4aqyizzqhm3xc51704ea7wt4w130t7yxb22mass7totbk9grzxgwgzuzsxy21bhmxc76n3x1m2kxj0vhxd8scgzi1argdo8i67mofsvkc02d9pwebmnpv2ai24me8diubbnwmma5023eyjbk9ssm33hxtxy8hrzt22jh8bpkddnngp3vsof6qgsy45zmeenuan0jsakobjlxnwt13n17s3osjy9b3hmet4mjacjymdsag6n9eciq0mk1j70t3zj6598unrobtm59r6p8rnmokm98t3k10vjy1xv2fjvsrhepupaa0arm07cunjd83xujphu4ivlopqgw0eombq3yt92xzq9tpu5hz4kaw5pjn63xw96a3efdtxclmf7v24ly7c2mclk03qxh8sis2mh7lymy8km02uz0tzydshylk0dkgqnmws4nbyyieerzlgczfxc57vff74a7bj1c5u7lwkykger4jsjxvgq8bhhjeh8xa4qkva1zc6r0ircfu5pj0tivm3pic35nw256a0tb2k4fm4imstbm8tzci2eskdsuqld7ji4aomgugmkn7qsi5f8ld8trbr6m0j9c529l8duij15kyf45cb528c4qpwdpu51zihc3nqr8hbbkynd217s2qxlehb5weporv5ahpdimubujo1ho6v8u0mlpkbcxtvxjgdfme62gt7oqnd619vy5vfqy2uyna6729ga23yr6ws4ask86z75qozpyilgprbajhjcq2sj7pt339y8y546fk4odx9ewaffmajuj2im7mblmdjr9of8eir8whwpcalsg4solrnoh16w1vzrmixzlhj4gb488w1164g3t20t44hwnhl370xb1sq7ocles3tnauoubuzl3z40pocqjrtbtr72un141h72yjh1v3qx9skqpgwx8puxhb6l42ktnkr3bhwosf4fdvh
s1wy0b0xyweaydqztkw6mch4aqetgyy423m27osok7oo47fb7wsa607qny5fzlnaau8vy16izlync9gsljm2og43pfxweb8iussqd61omof3ytwkaa80vcsprzfcq1ki3buqjhbuwnuuk7ij0pl5fbyz1qzn01aize6l32mdavj7vckbw8zftoxdhmeikk3th19x16roq6gnli020kjhahvil9evw062bb9tlx1k8iqkok3n7hqp7lzijoo342y9tqpbr7f2bexc50887xri4k968xx895kv6s6d98y5seh2cud24mou1lhy8r13ttvqpajpl74es347q8mrxxktt8ybu0y5lzxz9uh4921yf5nsn9ytdba03p2n80oa5lm00yxao4pubw33s9flxpj198u1683g994jm22bxoicjuhu55eh0v67pf2cdicl89q8r8fr4xame96sm6lquqh6hdtek0fo3hfcmap0xandozapdodyn8g8if80hdcuhtvkptto21o8xwefszg0we7183t6sg1x3vaxcl 00:06:57.803 12:53:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:57.803 12:53:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:57.803 12:53:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:57.803 12:53:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:57.803 { 00:06:57.803 "subsystems": [ 00:06:57.803 { 00:06:57.803 "subsystem": "bdev", 00:06:57.803 "config": [ 00:06:57.803 { 00:06:57.803 "params": { 00:06:57.803 "trtype": "pcie", 00:06:57.803 "traddr": "0000:00:10.0", 00:06:57.803 "name": "Nvme0" 00:06:57.803 }, 00:06:57.803 "method": "bdev_nvme_attach_controller" 00:06:57.803 }, 00:06:57.803 { 00:06:57.803 "method": "bdev_wait_for_examine" 00:06:57.803 } 00:06:57.803 ] 00:06:57.803 } 00:06:57.803 ] 00:06:57.803 } 00:06:57.803 [2024-11-29 12:53:29.260905] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:06:57.803 [2024-11-29 12:53:29.261014] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60009 ] 00:06:58.062 [2024-11-29 12:53:29.405559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.062 [2024-11-29 12:53:29.453887] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.062 [2024-11-29 12:53:29.512911] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:58.320  [2024-11-29T12:53:29.835Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:58.320 00:06:58.320 12:53:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:58.320 12:53:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:58.320 12:53:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:58.320 12:53:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:58.579 { 00:06:58.579 "subsystems": [ 00:06:58.579 { 00:06:58.579 "subsystem": "bdev", 00:06:58.579 "config": [ 00:06:58.579 { 00:06:58.579 "params": { 00:06:58.579 "trtype": "pcie", 00:06:58.579 "traddr": "0000:00:10.0", 00:06:58.579 "name": "Nvme0" 00:06:58.579 }, 00:06:58.579 "method": "bdev_nvme_attach_controller" 00:06:58.579 }, 00:06:58.579 { 00:06:58.579 "method": "bdev_wait_for_examine" 00:06:58.579 } 00:06:58.579 ] 00:06:58.579 } 00:06:58.579 ] 00:06:58.579 } 00:06:58.579 [2024-11-29 12:53:29.882908] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
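dd_rw_offset, which starts here, checks addressed I/O rather than bulk throughput: it writes one 4096-byte block of generated ASCII data one block into the namespace (--seek=1), reads the same offset back (--skip=1 --count=1), and then compares the read-back bytes against the original string with read -rn4096 and a [[ ... == ... ]] match, as the following trace lines show. A compressed sketch of that round trip is below; writing "$data" into dd.dump0 with printf, reading data_check straight from dd.dump1, and quoting the comparison are illustrative choices, while the real script generates the dump file with gen_bytes and matches the pattern unquoted.

  # Offset round trip from dd_rw_offset (offsets are in blocks; "$data" is the
  # 4096-byte generated string echoed in the trace).
  printf '%s' "$data" > dd.dump0
  spdk_dd --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json <(gen_conf)              # write block #1
  spdk_dd --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json <(gen_conf)    # read block #1 back
  read -rn4096 data_check < dd.dump1
  [[ $data == "$data_check" ]] && echo "offset round trip OK"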
00:06:58.579 [2024-11-29 12:53:29.883004] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60028 ] 00:06:58.579 [2024-11-29 12:53:30.026703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.579 [2024-11-29 12:53:30.076820] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.838 [2024-11-29 12:53:30.132554] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:58.838  [2024-11-29T12:53:30.612Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:59.097 00:06:59.097 12:53:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:59.098 12:53:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ s75p6nmy7duf8kmdsnk5g4xhvhxcro704vkn0wp9wtz0lubjvibbmzgo9u13buhoktsmdnh0kz2f7bsknw18jju0s35i31fbauk6b85q6dl98zredxpesvv3b4t4oduf77gq6eg1r5aik1gm0qyddjnsv8tmsblqcjw8273fzybwj22ck05qf9g4nnrtu2wwb0eb6n4im0d48hmrd467lfryeuyqs4z9r54g7ewvvtyqt1r3yegjll0fvdsbtpnuk3vcrxnzctgo8esnmh4mbopgpnbjd0kfjjyn7s2euue85yjuu9q29x4slit7pop2c2g7v445zemf8kuf8ysidz9xeyrcjpfvuhwkzru7zsmhxcg6hsfvwn76jp5sm4hplfp8wduqk8f051snwuoz8pbolmjubrewx7othy01yaxcx2t0ihs2wjzzp1ack695fmcaox6l7e7wryz02rguly5sp0f7171ijdfv996oo42g3gzy47j4dabwimvs0px5p4323gfphfc7uj13sjunkl34u8pakowmq5tfklgjb0okgtw48g0wlyig4bhorbde04p28xmsljg9hgfcsdysqfkk19eh9ufb2lnazqfs8klf2wpfkur2qjvqk3g1oo2v425dlbm2nraajgr0s23geqcr4g7or7wgoe2xo8m1vk40jpbwhh77u85s9xjv56qwom7a7trt31f1m3o24pkc62mg54ftnc6bjgl1pyf8bi9nhiahqxrt83oiyvsoddkq5upc2y0doabgzkux41p6xssziwrh20rdyn03ffm4vy1uuumnng9zmguypx9x0g2pvlbbb2u646x0ntsunq3oxt5rin0orup410au4q8kus6f4qnbq945igzexj219sqtkzyd43cwxtgqbva71drcvjcm8747z3bpy6z2t24jykrekpkctm0n0n84z9qp12wchnuafzaky0jh00y3zej3n8v6losr4chvpsrpv2y1d2g9huii7n916qxzoxo767becddf5d96q40izlzoq8r15iqigzruk7o2du0vhe4z95ouucksooyb42srq7ojbkmpg8kjxzodzfa775thozt867kamjrm1yjblbfjgjjhun04v47qt0aybc7nfmrxw13fuqm6c6me6505ubtqy6cm93erpvc0565li6fe0d0dhgl88xb719zq4i7ttvpfyhgkz7xeneuannkqipn72llip6la7lc0v98lpgcpeph8x79ar1w4e858gs8ajwpiohmjen67r08dhqrmswouresd4lls7d9mjpqr83usikbybtszd80pwm5ktpaa3k8rgq2ep4v4zbgiur6rwahfdf732kir75qg32o0pekz2e2r9p48qnbcwf4hywwwwbd1xhsmy3ckx2qmy3pm96ixvvtn1ccgftwlq5qf9e5e5lunmcxx0y1deg1w2yiohii0livewx5iaa3fpstmmzab24g2ers3t6gijw7p8yfmdmx3kkb0vndv3oqbu7t5srjsl4dca1afim5f8duuxgi3rpqblyhfy7b6mu7pzv1gec7vxyyg43vrdwtu7p3xn6k8nxu60ukuvdhtbljq3icn8cgrit2kllj8nbylaz6y4d1iyqoe3qht95bfz5mh4j8rt0bvpuio98ehzcw0aei45q2xt9lb0230i1avzhhka7g80iixr42rwqqhcft6k5rsx6iwd0nyfv4nvrpfwrmwxtzoyvn36nog2qbsd3xq8k70ywxw37x5h80sgsvpvdxzx62p94tht9291qutss0pwrf9gzrc9wps2ol89cb8wvnm9dtb7i0porak9ru70x3qqo3dkgz7f30t04sjv1n4r33j3cy2vcuiw76ndsd8v5lqlzmf6hiai7jjwvvx8drga25jvb4jk96iv22i2pq0adb55fc3iswi0jg8x0cv4iclw8odkf8kd3g6shf2720oo4qc9par6x9bz31omf97zmfumh5s3q5yeajmryboygqd4csge1euzvb8t34p6k3akwxcdfgaojapqanpx6w7y0el9f6vy3b6rnzljk317d62qyhapynaoavilo8j9yjtwva9pwingc6jvxqs3b3kuq6a0smtwyflexlivqf0wd0woowk0ozwlcu06cuyablfrj3jwcfxrhrr541mxk6k5akc8kof4oeughyooy6r6iaa8yl73t0gf3veqaw97g3gvehqqcg4u5xiv5vc8y7tjaeu5skkpra7lf7fx0su2ryrm9zqjl7ro83ld5e70ifrpnluym6e4eev8hkia24ysxbve8hkaz1d7clvzwloja2sxp37s2eswfsg9y3k6v3585fsiebjrj3uu0mexk9l515mzz29lp9syy1pqystjns7dp7gqdlla7jxp367rk21e9zhc7otjbum6s04080oq7xk8567iaagijswc79efknzv35x01ydvsebftuzynn96zoo18219blthcxh1f2fz2zoj19ovi2ydzyjgb0uhx4qdpxanih9n4dfgxypav5xduagxpqzqs3teaqgugcxd1arhnm6qj8syl67f5te4aqyizzqhm3x
c51704ea7wt4w130t7yxb22mass7totbk9grzxgwgzuzsxy21bhmxc76n3x1m2kxj0vhxd8scgzi1argdo8i67mofsvkc02d9pwebmnpv2ai24me8diubbnwmma5023eyjbk9ssm33hxtxy8hrzt22jh8bpkddnngp3vsof6qgsy45zmeenuan0jsakobjlxnwt13n17s3osjy9b3hmet4mjacjymdsag6n9eciq0mk1j70t3zj6598unrobtm59r6p8rnmokm98t3k10vjy1xv2fjvsrhepupaa0arm07cunjd83xujphu4ivlopqgw0eombq3yt92xzq9tpu5hz4kaw5pjn63xw96a3efdtxclmf7v24ly7c2mclk03qxh8sis2mh7lymy8km02uz0tzydshylk0dkgqnmws4nbyyieerzlgczfxc57vff74a7bj1c5u7lwkykger4jsjxvgq8bhhjeh8xa4qkva1zc6r0ircfu5pj0tivm3pic35nw256a0tb2k4fm4imstbm8tzci2eskdsuqld7ji4aomgugmkn7qsi5f8ld8trbr6m0j9c529l8duij15kyf45cb528c4qpwdpu51zihc3nqr8hbbkynd217s2qxlehb5weporv5ahpdimubujo1ho6v8u0mlpkbcxtvxjgdfme62gt7oqnd619vy5vfqy2uyna6729ga23yr6ws4ask86z75qozpyilgprbajhjcq2sj7pt339y8y546fk4odx9ewaffmajuj2im7mblmdjr9of8eir8whwpcalsg4solrnoh16w1vzrmixzlhj4gb488w1164g3t20t44hwnhl370xb1sq7ocles3tnauoubuzl3z40pocqjrtbtr72un141h72yjh1v3qx9skqpgwx8puxhb6l42ktnkr3bhwosf4fdvhs1wy0b0xyweaydqztkw6mch4aqetgyy423m27osok7oo47fb7wsa607qny5fzlnaau8vy16izlync9gsljm2og43pfxweb8iussqd61omof3ytwkaa80vcsprzfcq1ki3buqjhbuwnuuk7ij0pl5fbyz1qzn01aize6l32mdavj7vckbw8zftoxdhmeikk3th19x16roq6gnli020kjhahvil9evw062bb9tlx1k8iqkok3n7hqp7lzijoo342y9tqpbr7f2bexc50887xri4k968xx895kv6s6d98y5seh2cud24mou1lhy8r13ttvqpajpl74es347q8mrxxktt8ybu0y5lzxz9uh4921yf5nsn9ytdba03p2n80oa5lm00yxao4pubw33s9flxpj198u1683g994jm22bxoicjuhu55eh0v67pf2cdicl89q8r8fr4xame96sm6lquqh6hdtek0fo3hfcmap0xandozapdodyn8g8if80hdcuhtvkptto21o8xwefszg0we7183t6sg1x3vaxcl == \s\7\5\p\6\n\m\y\7\d\u\f\8\k\m\d\s\n\k\5\g\4\x\h\v\h\x\c\r\o\7\0\4\v\k\n\0\w\p\9\w\t\z\0\l\u\b\j\v\i\b\b\m\z\g\o\9\u\1\3\b\u\h\o\k\t\s\m\d\n\h\0\k\z\2\f\7\b\s\k\n\w\1\8\j\j\u\0\s\3\5\i\3\1\f\b\a\u\k\6\b\8\5\q\6\d\l\9\8\z\r\e\d\x\p\e\s\v\v\3\b\4\t\4\o\d\u\f\7\7\g\q\6\e\g\1\r\5\a\i\k\1\g\m\0\q\y\d\d\j\n\s\v\8\t\m\s\b\l\q\c\j\w\8\2\7\3\f\z\y\b\w\j\2\2\c\k\0\5\q\f\9\g\4\n\n\r\t\u\2\w\w\b\0\e\b\6\n\4\i\m\0\d\4\8\h\m\r\d\4\6\7\l\f\r\y\e\u\y\q\s\4\z\9\r\5\4\g\7\e\w\v\v\t\y\q\t\1\r\3\y\e\g\j\l\l\0\f\v\d\s\b\t\p\n\u\k\3\v\c\r\x\n\z\c\t\g\o\8\e\s\n\m\h\4\m\b\o\p\g\p\n\b\j\d\0\k\f\j\j\y\n\7\s\2\e\u\u\e\8\5\y\j\u\u\9\q\2\9\x\4\s\l\i\t\7\p\o\p\2\c\2\g\7\v\4\4\5\z\e\m\f\8\k\u\f\8\y\s\i\d\z\9\x\e\y\r\c\j\p\f\v\u\h\w\k\z\r\u\7\z\s\m\h\x\c\g\6\h\s\f\v\w\n\7\6\j\p\5\s\m\4\h\p\l\f\p\8\w\d\u\q\k\8\f\0\5\1\s\n\w\u\o\z\8\p\b\o\l\m\j\u\b\r\e\w\x\7\o\t\h\y\0\1\y\a\x\c\x\2\t\0\i\h\s\2\w\j\z\z\p\1\a\c\k\6\9\5\f\m\c\a\o\x\6\l\7\e\7\w\r\y\z\0\2\r\g\u\l\y\5\s\p\0\f\7\1\7\1\i\j\d\f\v\9\9\6\o\o\4\2\g\3\g\z\y\4\7\j\4\d\a\b\w\i\m\v\s\0\p\x\5\p\4\3\2\3\g\f\p\h\f\c\7\u\j\1\3\s\j\u\n\k\l\3\4\u\8\p\a\k\o\w\m\q\5\t\f\k\l\g\j\b\0\o\k\g\t\w\4\8\g\0\w\l\y\i\g\4\b\h\o\r\b\d\e\0\4\p\2\8\x\m\s\l\j\g\9\h\g\f\c\s\d\y\s\q\f\k\k\1\9\e\h\9\u\f\b\2\l\n\a\z\q\f\s\8\k\l\f\2\w\p\f\k\u\r\2\q\j\v\q\k\3\g\1\o\o\2\v\4\2\5\d\l\b\m\2\n\r\a\a\j\g\r\0\s\2\3\g\e\q\c\r\4\g\7\o\r\7\w\g\o\e\2\x\o\8\m\1\v\k\4\0\j\p\b\w\h\h\7\7\u\8\5\s\9\x\j\v\5\6\q\w\o\m\7\a\7\t\r\t\3\1\f\1\m\3\o\2\4\p\k\c\6\2\m\g\5\4\f\t\n\c\6\b\j\g\l\1\p\y\f\8\b\i\9\n\h\i\a\h\q\x\r\t\8\3\o\i\y\v\s\o\d\d\k\q\5\u\p\c\2\y\0\d\o\a\b\g\z\k\u\x\4\1\p\6\x\s\s\z\i\w\r\h\2\0\r\d\y\n\0\3\f\f\m\4\v\y\1\u\u\u\m\n\n\g\9\z\m\g\u\y\p\x\9\x\0\g\2\p\v\l\b\b\b\2\u\6\4\6\x\0\n\t\s\u\n\q\3\o\x\t\5\r\i\n\0\o\r\u\p\4\1\0\a\u\4\q\8\k\u\s\6\f\4\q\n\b\q\9\4\5\i\g\z\e\x\j\2\1\9\s\q\t\k\z\y\d\4\3\c\w\x\t\g\q\b\v\a\7\1\d\r\c\v\j\c\m\8\7\4\7\z\3\b\p\y\6\z\2\t\2\4\j\y\k\r\e\k\p\k\c\t\m\0\n\0\n\8\4\z\9\q\p\1\2\w\c\h\n\u\a\f\z\a\k\y\0\j\h\0\0\y\3\z\e\j\3\n\8\v\6\l\o\s\r\4\c\h\v\p\s\r\p\v\2\y\1\d\2\g\9\h\u\i\i\7\n\9\1\6\q\x\z\o\x\o\7\6\7\b\e\c\d\d\f\5\d\9\
6\q\4\0\i\z\l\z\o\q\8\r\1\5\i\q\i\g\z\r\u\k\7\o\2\d\u\0\v\h\e\4\z\9\5\o\u\u\c\k\s\o\o\y\b\4\2\s\r\q\7\o\j\b\k\m\p\g\8\k\j\x\z\o\d\z\f\a\7\7\5\t\h\o\z\t\8\6\7\k\a\m\j\r\m\1\y\j\b\l\b\f\j\g\j\j\h\u\n\0\4\v\4\7\q\t\0\a\y\b\c\7\n\f\m\r\x\w\1\3\f\u\q\m\6\c\6\m\e\6\5\0\5\u\b\t\q\y\6\c\m\9\3\e\r\p\v\c\0\5\6\5\l\i\6\f\e\0\d\0\d\h\g\l\8\8\x\b\7\1\9\z\q\4\i\7\t\t\v\p\f\y\h\g\k\z\7\x\e\n\e\u\a\n\n\k\q\i\p\n\7\2\l\l\i\p\6\l\a\7\l\c\0\v\9\8\l\p\g\c\p\e\p\h\8\x\7\9\a\r\1\w\4\e\8\5\8\g\s\8\a\j\w\p\i\o\h\m\j\e\n\6\7\r\0\8\d\h\q\r\m\s\w\o\u\r\e\s\d\4\l\l\s\7\d\9\m\j\p\q\r\8\3\u\s\i\k\b\y\b\t\s\z\d\8\0\p\w\m\5\k\t\p\a\a\3\k\8\r\g\q\2\e\p\4\v\4\z\b\g\i\u\r\6\r\w\a\h\f\d\f\7\3\2\k\i\r\7\5\q\g\3\2\o\0\p\e\k\z\2\e\2\r\9\p\4\8\q\n\b\c\w\f\4\h\y\w\w\w\w\b\d\1\x\h\s\m\y\3\c\k\x\2\q\m\y\3\p\m\9\6\i\x\v\v\t\n\1\c\c\g\f\t\w\l\q\5\q\f\9\e\5\e\5\l\u\n\m\c\x\x\0\y\1\d\e\g\1\w\2\y\i\o\h\i\i\0\l\i\v\e\w\x\5\i\a\a\3\f\p\s\t\m\m\z\a\b\2\4\g\2\e\r\s\3\t\6\g\i\j\w\7\p\8\y\f\m\d\m\x\3\k\k\b\0\v\n\d\v\3\o\q\b\u\7\t\5\s\r\j\s\l\4\d\c\a\1\a\f\i\m\5\f\8\d\u\u\x\g\i\3\r\p\q\b\l\y\h\f\y\7\b\6\m\u\7\p\z\v\1\g\e\c\7\v\x\y\y\g\4\3\v\r\d\w\t\u\7\p\3\x\n\6\k\8\n\x\u\6\0\u\k\u\v\d\h\t\b\l\j\q\3\i\c\n\8\c\g\r\i\t\2\k\l\l\j\8\n\b\y\l\a\z\6\y\4\d\1\i\y\q\o\e\3\q\h\t\9\5\b\f\z\5\m\h\4\j\8\r\t\0\b\v\p\u\i\o\9\8\e\h\z\c\w\0\a\e\i\4\5\q\2\x\t\9\l\b\0\2\3\0\i\1\a\v\z\h\h\k\a\7\g\8\0\i\i\x\r\4\2\r\w\q\q\h\c\f\t\6\k\5\r\s\x\6\i\w\d\0\n\y\f\v\4\n\v\r\p\f\w\r\m\w\x\t\z\o\y\v\n\3\6\n\o\g\2\q\b\s\d\3\x\q\8\k\7\0\y\w\x\w\3\7\x\5\h\8\0\s\g\s\v\p\v\d\x\z\x\6\2\p\9\4\t\h\t\9\2\9\1\q\u\t\s\s\0\p\w\r\f\9\g\z\r\c\9\w\p\s\2\o\l\8\9\c\b\8\w\v\n\m\9\d\t\b\7\i\0\p\o\r\a\k\9\r\u\7\0\x\3\q\q\o\3\d\k\g\z\7\f\3\0\t\0\4\s\j\v\1\n\4\r\3\3\j\3\c\y\2\v\c\u\i\w\7\6\n\d\s\d\8\v\5\l\q\l\z\m\f\6\h\i\a\i\7\j\j\w\v\v\x\8\d\r\g\a\2\5\j\v\b\4\j\k\9\6\i\v\2\2\i\2\p\q\0\a\d\b\5\5\f\c\3\i\s\w\i\0\j\g\8\x\0\c\v\4\i\c\l\w\8\o\d\k\f\8\k\d\3\g\6\s\h\f\2\7\2\0\o\o\4\q\c\9\p\a\r\6\x\9\b\z\3\1\o\m\f\9\7\z\m\f\u\m\h\5\s\3\q\5\y\e\a\j\m\r\y\b\o\y\g\q\d\4\c\s\g\e\1\e\u\z\v\b\8\t\3\4\p\6\k\3\a\k\w\x\c\d\f\g\a\o\j\a\p\q\a\n\p\x\6\w\7\y\0\e\l\9\f\6\v\y\3\b\6\r\n\z\l\j\k\3\1\7\d\6\2\q\y\h\a\p\y\n\a\o\a\v\i\l\o\8\j\9\y\j\t\w\v\a\9\p\w\i\n\g\c\6\j\v\x\q\s\3\b\3\k\u\q\6\a\0\s\m\t\w\y\f\l\e\x\l\i\v\q\f\0\w\d\0\w\o\o\w\k\0\o\z\w\l\c\u\0\6\c\u\y\a\b\l\f\r\j\3\j\w\c\f\x\r\h\r\r\5\4\1\m\x\k\6\k\5\a\k\c\8\k\o\f\4\o\e\u\g\h\y\o\o\y\6\r\6\i\a\a\8\y\l\7\3\t\0\g\f\3\v\e\q\a\w\9\7\g\3\g\v\e\h\q\q\c\g\4\u\5\x\i\v\5\v\c\8\y\7\t\j\a\e\u\5\s\k\k\p\r\a\7\l\f\7\f\x\0\s\u\2\r\y\r\m\9\z\q\j\l\7\r\o\8\3\l\d\5\e\7\0\i\f\r\p\n\l\u\y\m\6\e\4\e\e\v\8\h\k\i\a\2\4\y\s\x\b\v\e\8\h\k\a\z\1\d\7\c\l\v\z\w\l\o\j\a\2\s\x\p\3\7\s\2\e\s\w\f\s\g\9\y\3\k\6\v\3\5\8\5\f\s\i\e\b\j\r\j\3\u\u\0\m\e\x\k\9\l\5\1\5\m\z\z\2\9\l\p\9\s\y\y\1\p\q\y\s\t\j\n\s\7\d\p\7\g\q\d\l\l\a\7\j\x\p\3\6\7\r\k\2\1\e\9\z\h\c\7\o\t\j\b\u\m\6\s\0\4\0\8\0\o\q\7\x\k\8\5\6\7\i\a\a\g\i\j\s\w\c\7\9\e\f\k\n\z\v\3\5\x\0\1\y\d\v\s\e\b\f\t\u\z\y\n\n\9\6\z\o\o\1\8\2\1\9\b\l\t\h\c\x\h\1\f\2\f\z\2\z\o\j\1\9\o\v\i\2\y\d\z\y\j\g\b\0\u\h\x\4\q\d\p\x\a\n\i\h\9\n\4\d\f\g\x\y\p\a\v\5\x\d\u\a\g\x\p\q\z\q\s\3\t\e\a\q\g\u\g\c\x\d\1\a\r\h\n\m\6\q\j\8\s\y\l\6\7\f\5\t\e\4\a\q\y\i\z\z\q\h\m\3\x\c\5\1\7\0\4\e\a\7\w\t\4\w\1\3\0\t\7\y\x\b\2\2\m\a\s\s\7\t\o\t\b\k\9\g\r\z\x\g\w\g\z\u\z\s\x\y\2\1\b\h\m\x\c\7\6\n\3\x\1\m\2\k\x\j\0\v\h\x\d\8\s\c\g\z\i\1\a\r\g\d\o\8\i\6\7\m\o\f\s\v\k\c\0\2\d\9\p\w\e\b\m\n\p\v\2\a\i\2\4\m\e\8\d\i\u\b\b\n\w\m\m\a\5\0\2\3\e\y\j\b\k\9\s\s\m\3\3\h\x\t\x\y\8\h\r\z\t\2\2\j\h\8\b\p\k\d\d\n\n\g\p\3\v\s\o\f\6\q\g\s\y\4\5\z\m\e\e\n\u\a\n\0\j\s\a\k\o\b\j\l\x\n\w\t\1\3\n\1\7\s
\3\o\s\j\y\9\b\3\h\m\e\t\4\m\j\a\c\j\y\m\d\s\a\g\6\n\9\e\c\i\q\0\m\k\1\j\7\0\t\3\z\j\6\5\9\8\u\n\r\o\b\t\m\5\9\r\6\p\8\r\n\m\o\k\m\9\8\t\3\k\1\0\v\j\y\1\x\v\2\f\j\v\s\r\h\e\p\u\p\a\a\0\a\r\m\0\7\c\u\n\j\d\8\3\x\u\j\p\h\u\4\i\v\l\o\p\q\g\w\0\e\o\m\b\q\3\y\t\9\2\x\z\q\9\t\p\u\5\h\z\4\k\a\w\5\p\j\n\6\3\x\w\9\6\a\3\e\f\d\t\x\c\l\m\f\7\v\2\4\l\y\7\c\2\m\c\l\k\0\3\q\x\h\8\s\i\s\2\m\h\7\l\y\m\y\8\k\m\0\2\u\z\0\t\z\y\d\s\h\y\l\k\0\d\k\g\q\n\m\w\s\4\n\b\y\y\i\e\e\r\z\l\g\c\z\f\x\c\5\7\v\f\f\7\4\a\7\b\j\1\c\5\u\7\l\w\k\y\k\g\e\r\4\j\s\j\x\v\g\q\8\b\h\h\j\e\h\8\x\a\4\q\k\v\a\1\z\c\6\r\0\i\r\c\f\u\5\p\j\0\t\i\v\m\3\p\i\c\3\5\n\w\2\5\6\a\0\t\b\2\k\4\f\m\4\i\m\s\t\b\m\8\t\z\c\i\2\e\s\k\d\s\u\q\l\d\7\j\i\4\a\o\m\g\u\g\m\k\n\7\q\s\i\5\f\8\l\d\8\t\r\b\r\6\m\0\j\9\c\5\2\9\l\8\d\u\i\j\1\5\k\y\f\4\5\c\b\5\2\8\c\4\q\p\w\d\p\u\5\1\z\i\h\c\3\n\q\r\8\h\b\b\k\y\n\d\2\1\7\s\2\q\x\l\e\h\b\5\w\e\p\o\r\v\5\a\h\p\d\i\m\u\b\u\j\o\1\h\o\6\v\8\u\0\m\l\p\k\b\c\x\t\v\x\j\g\d\f\m\e\6\2\g\t\7\o\q\n\d\6\1\9\v\y\5\v\f\q\y\2\u\y\n\a\6\7\2\9\g\a\2\3\y\r\6\w\s\4\a\s\k\8\6\z\7\5\q\o\z\p\y\i\l\g\p\r\b\a\j\h\j\c\q\2\s\j\7\p\t\3\3\9\y\8\y\5\4\6\f\k\4\o\d\x\9\e\w\a\f\f\m\a\j\u\j\2\i\m\7\m\b\l\m\d\j\r\9\o\f\8\e\i\r\8\w\h\w\p\c\a\l\s\g\4\s\o\l\r\n\o\h\1\6\w\1\v\z\r\m\i\x\z\l\h\j\4\g\b\4\8\8\w\1\1\6\4\g\3\t\2\0\t\4\4\h\w\n\h\l\3\7\0\x\b\1\s\q\7\o\c\l\e\s\3\t\n\a\u\o\u\b\u\z\l\3\z\4\0\p\o\c\q\j\r\t\b\t\r\7\2\u\n\1\4\1\h\7\2\y\j\h\1\v\3\q\x\9\s\k\q\p\g\w\x\8\p\u\x\h\b\6\l\4\2\k\t\n\k\r\3\b\h\w\o\s\f\4\f\d\v\h\s\1\w\y\0\b\0\x\y\w\e\a\y\d\q\z\t\k\w\6\m\c\h\4\a\q\e\t\g\y\y\4\2\3\m\2\7\o\s\o\k\7\o\o\4\7\f\b\7\w\s\a\6\0\7\q\n\y\5\f\z\l\n\a\a\u\8\v\y\1\6\i\z\l\y\n\c\9\g\s\l\j\m\2\o\g\4\3\p\f\x\w\e\b\8\i\u\s\s\q\d\6\1\o\m\o\f\3\y\t\w\k\a\a\8\0\v\c\s\p\r\z\f\c\q\1\k\i\3\b\u\q\j\h\b\u\w\n\u\u\k\7\i\j\0\p\l\5\f\b\y\z\1\q\z\n\0\1\a\i\z\e\6\l\3\2\m\d\a\v\j\7\v\c\k\b\w\8\z\f\t\o\x\d\h\m\e\i\k\k\3\t\h\1\9\x\1\6\r\o\q\6\g\n\l\i\0\2\0\k\j\h\a\h\v\i\l\9\e\v\w\0\6\2\b\b\9\t\l\x\1\k\8\i\q\k\o\k\3\n\7\h\q\p\7\l\z\i\j\o\o\3\4\2\y\9\t\q\p\b\r\7\f\2\b\e\x\c\5\0\8\8\7\x\r\i\4\k\9\6\8\x\x\8\9\5\k\v\6\s\6\d\9\8\y\5\s\e\h\2\c\u\d\2\4\m\o\u\1\l\h\y\8\r\1\3\t\t\v\q\p\a\j\p\l\7\4\e\s\3\4\7\q\8\m\r\x\x\k\t\t\8\y\b\u\0\y\5\l\z\x\z\9\u\h\4\9\2\1\y\f\5\n\s\n\9\y\t\d\b\a\0\3\p\2\n\8\0\o\a\5\l\m\0\0\y\x\a\o\4\p\u\b\w\3\3\s\9\f\l\x\p\j\1\9\8\u\1\6\8\3\g\9\9\4\j\m\2\2\b\x\o\i\c\j\u\h\u\5\5\e\h\0\v\6\7\p\f\2\c\d\i\c\l\8\9\q\8\r\8\f\r\4\x\a\m\e\9\6\s\m\6\l\q\u\q\h\6\h\d\t\e\k\0\f\o\3\h\f\c\m\a\p\0\x\a\n\d\o\z\a\p\d\o\d\y\n\8\g\8\i\f\8\0\h\d\c\u\h\t\v\k\p\t\t\o\2\1\o\8\x\w\e\f\s\z\g\0\w\e\7\1\8\3\t\6\s\g\1\x\3\v\a\x\c\l ]] 00:06:59.098 00:06:59.098 real 0m1.282s 00:06:59.098 user 0m0.852s 00:06:59.098 sys 0m0.627s 00:06:59.098 ************************************ 00:06:59.098 END TEST dd_rw_offset 00:06:59.098 ************************************ 00:06:59.098 12:53:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.098 12:53:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:59.098 12:53:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:59.098 12:53:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:59.098 12:53:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:59.098 12:53:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:59.098 12:53:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:59.098 12:53:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
00:06:59.098 12:53:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:59.098 12:53:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:59.098 12:53:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:59.098 12:53:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:59.098 12:53:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:59.098 { 00:06:59.098 "subsystems": [ 00:06:59.098 { 00:06:59.098 "subsystem": "bdev", 00:06:59.098 "config": [ 00:06:59.098 { 00:06:59.098 "params": { 00:06:59.098 "trtype": "pcie", 00:06:59.098 "traddr": "0000:00:10.0", 00:06:59.098 "name": "Nvme0" 00:06:59.098 }, 00:06:59.098 "method": "bdev_nvme_attach_controller" 00:06:59.098 }, 00:06:59.098 { 00:06:59.098 "method": "bdev_wait_for_examine" 00:06:59.098 } 00:06:59.098 ] 00:06:59.098 } 00:06:59.098 ] 00:06:59.098 } 00:06:59.098 [2024-11-29 12:53:30.538232] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:06:59.098 [2024-11-29 12:53:30.538347] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60052 ] 00:06:59.357 [2024-11-29 12:53:30.684158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.357 [2024-11-29 12:53:30.742177] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.357 [2024-11-29 12:53:30.798188] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:59.616  [2024-11-29T12:53:31.131Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:59.616 00:06:59.616 12:53:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:59.616 00:06:59.616 real 0m17.850s 00:06:59.616 user 0m12.531s 00:06:59.616 sys 0m7.095s 00:06:59.616 12:53:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.616 12:53:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:59.616 ************************************ 00:06:59.616 END TEST spdk_dd_basic_rw 00:06:59.616 ************************************ 00:06:59.875 12:53:31 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:59.875 12:53:31 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.875 12:53:31 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.875 12:53:31 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:59.875 ************************************ 00:06:59.875 START TEST spdk_dd_posix 00:06:59.875 ************************************ 00:06:59.875 12:53:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:59.875 * Looking for test storage... 
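The dd_rw_offset check above verifies the copy by reading the target region back into a shell variable (read -rn4096 data_check) and comparing it against the generated payload, and the basic_rw suite then tears down by zero-filling the start of the Nvme0n1 bdev through spdk_dd with a JSON bdev config delivered on an inherited fd. A minimal sketch of that cleanup step, assuming the spdk_dd path and PCIe address from this run and using process substitution in place of the harness's gen_conf pipe on /dev/fd/62:

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  conf='{"subsystems":[{"subsystem":"bdev","config":[
    {"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},
     "method":"bdev_nvme_attach_controller"},
    {"method":"bdev_wait_for_examine"}]}]}'
  # overwrite the first 1 MiB of the bdev with zeroes, attaching the NVMe controller from the JSON config
  "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(printf '%s' "$conf")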
00:06:59.875 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:59.875 12:53:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:59.875 12:53:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lcov --version 00:06:59.875 12:53:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:59.875 12:53:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:59.875 12:53:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.875 12:53:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.875 12:53:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.875 12:53:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.875 12:53:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.875 12:53:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.875 12:53:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.875 12:53:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.875 12:53:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.875 12:53:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.875 12:53:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.875 12:53:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:06:59.875 12:53:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:06:59.875 12:53:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:59.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.876 --rc genhtml_branch_coverage=1 00:06:59.876 --rc genhtml_function_coverage=1 00:06:59.876 --rc genhtml_legend=1 00:06:59.876 --rc geninfo_all_blocks=1 00:06:59.876 --rc geninfo_unexecuted_blocks=1 00:06:59.876 00:06:59.876 ' 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:59.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.876 --rc genhtml_branch_coverage=1 00:06:59.876 --rc genhtml_function_coverage=1 00:06:59.876 --rc genhtml_legend=1 00:06:59.876 --rc geninfo_all_blocks=1 00:06:59.876 --rc geninfo_unexecuted_blocks=1 00:06:59.876 00:06:59.876 ' 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:59.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.876 --rc genhtml_branch_coverage=1 00:06:59.876 --rc genhtml_function_coverage=1 00:06:59.876 --rc genhtml_legend=1 00:06:59.876 --rc geninfo_all_blocks=1 00:06:59.876 --rc geninfo_unexecuted_blocks=1 00:06:59.876 00:06:59.876 ' 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:59.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.876 --rc genhtml_branch_coverage=1 00:06:59.876 --rc genhtml_function_coverage=1 00:06:59.876 --rc genhtml_legend=1 00:06:59.876 --rc geninfo_all_blocks=1 00:06:59.876 --rc geninfo_unexecuted_blocks=1 00:06:59.876 00:06:59.876 ' 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:59.876 * First test run, liburing in use 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:59.876 ************************************ 00:06:59.876 START TEST dd_flag_append 00:06:59.876 ************************************ 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=qkrlstbreu5jwljd6msuc6lhp6qwlp8b 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=s4gvr5uhduvrmbx5mb7eaxopn4ufkga1 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s qkrlstbreu5jwljd6msuc6lhp6qwlp8b 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s s4gvr5uhduvrmbx5mb7eaxopn4ufkga1 00:06:59.876 12:53:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:00.135 [2024-11-29 12:53:31.440823] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
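The dd_flag_append test starting above generates two 32-character random strings (dump0 and dump1 in the trace), writes each to its dump file, appends dump0 onto dump1 through spdk_dd --oflag=append, and expects the result to be dump1's payload followed by dump0's. A stand-alone sketch of the same check, assuming spdk_dd is on PATH (the run above calls it by full path), /tmp paths in place of the repo's test/dd dump files, and openssl as a stand-in for the harness's gen_bytes helper:

  dump0=$(openssl rand -hex 16)   # 32 hex chars, analogous to gen_bytes 32
  dump1=$(openssl rand -hex 16)
  printf %s "$dump0" > /tmp/dd.dump0
  printf %s "$dump1" > /tmp/dd.dump1
  spdk_dd --if=/tmp/dd.dump0 --of=/tmp/dd.dump1 --oflag=append
  # after the append, dump1 must hold its own payload followed by dump0's
  [[ "$(cat /tmp/dd.dump1)" == "${dump1}${dump0}" ]] && echo "append flag OK"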
00:07:00.136 [2024-11-29 12:53:31.440947] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60124 ] 00:07:00.136 [2024-11-29 12:53:31.588845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.136 [2024-11-29 12:53:31.646541] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.394 [2024-11-29 12:53:31.704746] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:00.394  [2024-11-29T12:53:32.168Z] Copying: 32/32 [B] (average 31 kBps) 00:07:00.653 00:07:00.653 12:53:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ s4gvr5uhduvrmbx5mb7eaxopn4ufkga1qkrlstbreu5jwljd6msuc6lhp6qwlp8b == \s\4\g\v\r\5\u\h\d\u\v\r\m\b\x\5\m\b\7\e\a\x\o\p\n\4\u\f\k\g\a\1\q\k\r\l\s\t\b\r\e\u\5\j\w\l\j\d\6\m\s\u\c\6\l\h\p\6\q\w\l\p\8\b ]] 00:07:00.653 00:07:00.653 real 0m0.605s 00:07:00.653 user 0m0.337s 00:07:00.653 sys 0m0.329s 00:07:00.653 12:53:31 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.653 ************************************ 00:07:00.653 END TEST dd_flag_append 00:07:00.653 ************************************ 00:07:00.653 12:53:31 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:00.653 12:53:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:00.653 12:53:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.653 12:53:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.653 12:53:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:00.653 ************************************ 00:07:00.653 START TEST dd_flag_directory 00:07:00.653 ************************************ 00:07:00.653 12:53:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:07:00.653 12:53:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:00.653 12:53:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:07:00.653 12:53:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:00.653 12:53:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.653 12:53:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.653 12:53:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.653 12:53:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.653 12:53:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.653 12:53:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.653 12:53:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.653 12:53:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:00.653 12:53:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:00.653 [2024-11-29 12:53:32.088742] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:07:00.653 [2024-11-29 12:53:32.088832] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60159 ] 00:07:00.911 [2024-11-29 12:53:32.228248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.912 [2024-11-29 12:53:32.283886] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.912 [2024-11-29 12:53:32.355836] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:00.912 [2024-11-29 12:53:32.405075] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:00.912 [2024-11-29 12:53:32.405137] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:00.912 [2024-11-29 12:53:32.405157] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:01.170 [2024-11-29 12:53:32.582259] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:01.430 12:53:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:07:01.430 12:53:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:01.430 12:53:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:07:01.430 12:53:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:07:01.430 12:53:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:07:01.430 12:53:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:01.430 12:53:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:01.430 12:53:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:07:01.430 12:53:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:01.430 12:53:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.430 12:53:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.430 12:53:32 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.430 12:53:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.430 12:53:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.430 12:53:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.430 12:53:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.430 12:53:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:01.430 12:53:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:01.430 [2024-11-29 12:53:32.739432] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:07:01.430 [2024-11-29 12:53:32.739525] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60163 ] 00:07:01.430 [2024-11-29 12:53:32.882517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.690 [2024-11-29 12:53:32.959948] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.690 [2024-11-29 12:53:33.040612] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:01.690 [2024-11-29 12:53:33.094186] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:01.690 [2024-11-29 12:53:33.094251] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:01.690 [2024-11-29 12:53:33.094285] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:01.950 [2024-11-29 12:53:33.262912] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:01.950 12:53:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:07:01.950 12:53:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:01.950 12:53:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:07:01.950 12:53:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:07:01.950 12:53:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:07:01.950 12:53:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:01.950 00:07:01.950 real 0m1.326s 00:07:01.950 user 0m0.750s 00:07:01.950 sys 0m0.366s 00:07:01.950 12:53:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.950 12:53:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:07:01.950 ************************************ 00:07:01.950 END TEST dd_flag_directory 00:07:01.950 ************************************ 00:07:01.950 12:53:33 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:01.950 12:53:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.950 12:53:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.950 12:53:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:01.950 ************************************ 00:07:01.950 START TEST dd_flag_nofollow 00:07:01.950 ************************************ 00:07:01.950 12:53:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:07:01.950 12:53:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:01.950 12:53:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:01.950 12:53:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:01.950 12:53:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:01.950 12:53:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:01.950 12:53:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:07:01.950 12:53:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:01.950 12:53:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.950 12:53:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.950 12:53:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.950 12:53:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.950 12:53:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.950 12:53:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.950 12:53:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.950 12:53:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:01.950 12:53:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:02.209 [2024-11-29 12:53:33.494403] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
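The dd_flag_directory test that just finished is a negative check: spdk_dd must fail with "Not a directory" when a regular dump file is opened with --iflag=directory (and again with --oflag=directory), and the harness's NOT wrapper maps that expected failure back to a passing result. A rough equivalent without the wrapper, under the same /tmp and PATH assumptions as the sketch above:

  # a regular file opened with the directory flag must be rejected (ENOTDIR)
  if spdk_dd --if=/tmp/dd.dump0 --iflag=directory --of=/tmp/dd.dump1 2>/dev/null; then
      echo "unexpected success" >&2
      exit 1
  fi
  echo "directory flag correctly refused a regular file"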
00:07:02.209 [2024-11-29 12:53:33.494524] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60197 ] 00:07:02.209 [2024-11-29 12:53:33.642315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.209 [2024-11-29 12:53:33.690360] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.468 [2024-11-29 12:53:33.764812] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:02.468 [2024-11-29 12:53:33.814341] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:02.468 [2024-11-29 12:53:33.814418] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:02.468 [2024-11-29 12:53:33.814454] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:02.727 [2024-11-29 12:53:33.987933] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:02.727 12:53:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:07:02.727 12:53:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:02.727 12:53:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:07:02.727 12:53:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:07:02.727 12:53:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:07:02.727 12:53:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:02.727 12:53:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:02.727 12:53:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:07:02.727 12:53:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:02.727 12:53:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.727 12:53:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:02.727 12:53:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.727 12:53:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:02.727 12:53:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.727 12:53:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:02.728 12:53:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.728 12:53:34 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:02.728 12:53:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:02.728 [2024-11-29 12:53:34.148362] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:07:02.728 [2024-11-29 12:53:34.148479] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60212 ] 00:07:02.986 [2024-11-29 12:53:34.292566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.986 [2024-11-29 12:53:34.346724] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.986 [2024-11-29 12:53:34.420842] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:02.986 [2024-11-29 12:53:34.469474] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:02.986 [2024-11-29 12:53:34.469562] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:02.986 [2024-11-29 12:53:34.469614] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:03.245 [2024-11-29 12:53:34.637647] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:03.245 12:53:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:07:03.245 12:53:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:03.245 12:53:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:07:03.245 12:53:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:07:03.245 12:53:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:07:03.245 12:53:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:03.245 12:53:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:07:03.245 12:53:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:07:03.245 12:53:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:03.245 12:53:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:03.505 [2024-11-29 12:53:34.796946] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
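The dd_flag_nofollow test running here links both dump files with ln -fs and expects spdk_dd to fail with "Too many levels of symbolic links" when a link is opened with --iflag=nofollow or --oflag=nofollow, while the final copy through the link without the flag succeeds. A compact sketch under the same assumptions:

  ln -fs /tmp/dd.dump0 /tmp/dd.dump0.link
  # with nofollow, opening the symlink must fail (ELOOP: too many levels of symbolic links)
  ! spdk_dd --if=/tmp/dd.dump0.link --iflag=nofollow --of=/tmp/dd.dump1 2>/dev/null &&
      echo "nofollow rejected the symlink"
  # without the flag the same copy follows the link and succeeds
  spdk_dd --if=/tmp/dd.dump0.link --of=/tmp/dd.dump1 && echo "plain copy through the link OK"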
00:07:03.505 [2024-11-29 12:53:34.797041] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60214 ] 00:07:03.505 [2024-11-29 12:53:34.941579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.505 [2024-11-29 12:53:34.990064] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.763 [2024-11-29 12:53:35.065407] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:03.763  [2024-11-29T12:53:35.537Z] Copying: 512/512 [B] (average 500 kBps) 00:07:04.022 00:07:04.022 12:53:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 86rovhsxq3l3o01f5ifckfumm78fdfk8jscmjsul0sr99wq5dg2v4k6n88rltq1ydgex7y0293tbiypd5h6r2tg9zaffqq61e5i5r6saa11m0ffzt2j84v1ua4zkforqpbzomrfjq2yvx9mgoysesnddxgtlckwbdrxrmi99frqe7iqeg8uylu42jlanzjy9r4vem2mdxb5cy2amja8qre1lc9l8lhyo037qf4n8elhyp9gl76fs3r1i99awrnd6x3rppr2n6e51ze4dvze5ml8y7yhrs5cmeemmse0vblqptje94016be7ovgophhi5r7gwep007spckog6qjdaozpsp53bfg29pgovyhxuwwxrjrheoekhb1w1ryi3xzd7dvbhudyc44xzbmgsfx83lon0362mgfaf6pt5kyybzo008bp7bzvzhdhbz48l0u06yh2ix389ntcd4z6naqj1toskzv5sgaeerv1yon4gtgs8f06i6wtv7bmiouuolduy == \8\6\r\o\v\h\s\x\q\3\l\3\o\0\1\f\5\i\f\c\k\f\u\m\m\7\8\f\d\f\k\8\j\s\c\m\j\s\u\l\0\s\r\9\9\w\q\5\d\g\2\v\4\k\6\n\8\8\r\l\t\q\1\y\d\g\e\x\7\y\0\2\9\3\t\b\i\y\p\d\5\h\6\r\2\t\g\9\z\a\f\f\q\q\6\1\e\5\i\5\r\6\s\a\a\1\1\m\0\f\f\z\t\2\j\8\4\v\1\u\a\4\z\k\f\o\r\q\p\b\z\o\m\r\f\j\q\2\y\v\x\9\m\g\o\y\s\e\s\n\d\d\x\g\t\l\c\k\w\b\d\r\x\r\m\i\9\9\f\r\q\e\7\i\q\e\g\8\u\y\l\u\4\2\j\l\a\n\z\j\y\9\r\4\v\e\m\2\m\d\x\b\5\c\y\2\a\m\j\a\8\q\r\e\1\l\c\9\l\8\l\h\y\o\0\3\7\q\f\4\n\8\e\l\h\y\p\9\g\l\7\6\f\s\3\r\1\i\9\9\a\w\r\n\d\6\x\3\r\p\p\r\2\n\6\e\5\1\z\e\4\d\v\z\e\5\m\l\8\y\7\y\h\r\s\5\c\m\e\e\m\m\s\e\0\v\b\l\q\p\t\j\e\9\4\0\1\6\b\e\7\o\v\g\o\p\h\h\i\5\r\7\g\w\e\p\0\0\7\s\p\c\k\o\g\6\q\j\d\a\o\z\p\s\p\5\3\b\f\g\2\9\p\g\o\v\y\h\x\u\w\w\x\r\j\r\h\e\o\e\k\h\b\1\w\1\r\y\i\3\x\z\d\7\d\v\b\h\u\d\y\c\4\4\x\z\b\m\g\s\f\x\8\3\l\o\n\0\3\6\2\m\g\f\a\f\6\p\t\5\k\y\y\b\z\o\0\0\8\b\p\7\b\z\v\z\h\d\h\b\z\4\8\l\0\u\0\6\y\h\2\i\x\3\8\9\n\t\c\d\4\z\6\n\a\q\j\1\t\o\s\k\z\v\5\s\g\a\e\e\r\v\1\y\o\n\4\g\t\g\s\8\f\0\6\i\6\w\t\v\7\b\m\i\o\u\u\o\l\d\u\y ]] 00:07:04.022 00:07:04.022 real 0m1.950s 00:07:04.022 user 0m1.091s 00:07:04.022 sys 0m0.714s 00:07:04.022 12:53:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.022 12:53:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:04.022 ************************************ 00:07:04.022 END TEST dd_flag_nofollow 00:07:04.022 ************************************ 00:07:04.022 12:53:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:04.022 12:53:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.022 12:53:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.022 12:53:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:04.022 ************************************ 00:07:04.022 START TEST dd_flag_noatime 00:07:04.022 ************************************ 00:07:04.022 12:53:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:07:04.022 12:53:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:07:04.022 12:53:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:07:04.022 12:53:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:07:04.022 12:53:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:07:04.022 12:53:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:04.022 12:53:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:04.022 12:53:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1732884815 00:07:04.022 12:53:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:04.022 12:53:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1732884815 00:07:04.022 12:53:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:07:04.983 12:53:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:05.243 [2024-11-29 12:53:36.508657] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:07:05.243 [2024-11-29 12:53:36.508778] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60262 ] 00:07:05.243 [2024-11-29 12:53:36.659529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.243 [2024-11-29 12:53:36.739214] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.502 [2024-11-29 12:53:36.817745] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:05.502  [2024-11-29T12:53:37.276Z] Copying: 512/512 [B] (average 500 kBps) 00:07:05.761 00:07:05.761 12:53:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:05.761 12:53:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1732884815 )) 00:07:05.761 12:53:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:05.761 12:53:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1732884815 )) 00:07:05.761 12:53:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:05.761 [2024-11-29 12:53:37.196043] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
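The dd_flag_noatime test records the access time of dump0 with stat --printf=%X, sleeps one second, copies the file with --iflag=noatime, and then checks that the stored atime has not moved; a later copy without the flag is expected to advance it. A minimal sketch under the same assumptions (and a filesystem whose mount options do not already suppress atime updates):

  atime_before=$(stat --printf=%X /tmp/dd.dump0)
  sleep 1
  spdk_dd --if=/tmp/dd.dump0 --iflag=noatime --of=/tmp/dd.dump1
  atime_after=$(stat --printf=%X /tmp/dd.dump0)
  # reading with O_NOATIME must not advance the source file's access time
  (( atime_before == atime_after )) && echo "noatime kept the access time"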
00:07:05.761 [2024-11-29 12:53:37.196147] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60282 ] 00:07:06.020 [2024-11-29 12:53:37.345382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.020 [2024-11-29 12:53:37.418594] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.020 [2024-11-29 12:53:37.495566] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:06.278  [2024-11-29T12:53:37.793Z] Copying: 512/512 [B] (average 500 kBps) 00:07:06.278 00:07:06.538 12:53:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:06.538 12:53:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1732884817 )) 00:07:06.538 00:07:06.538 real 0m2.373s 00:07:06.538 user 0m0.762s 00:07:06.538 sys 0m0.767s 00:07:06.538 12:53:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.538 12:53:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:06.538 ************************************ 00:07:06.538 END TEST dd_flag_noatime 00:07:06.538 ************************************ 00:07:06.538 12:53:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:06.538 12:53:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.538 12:53:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.538 12:53:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:06.538 ************************************ 00:07:06.538 START TEST dd_flags_misc 00:07:06.538 ************************************ 00:07:06.538 12:53:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:07:06.538 12:53:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:06.538 12:53:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:06.538 12:53:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:06.538 12:53:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:06.538 12:53:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:06.538 12:53:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:06.538 12:53:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:06.538 12:53:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:06.538 12:53:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:06.538 [2024-11-29 12:53:37.929171] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
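The dd_flags_misc run starting above drives a small flag matrix: read-side flags direct and nonblock, write-side flags direct, nonblock, sync and dsync, with each input/output combination pushed through spdk_dd against the two dump files. A sketch of the loop structure implied by the flags_ro/flags_rw arrays in the trace, assuming the /tmp dump files from the earlier sketches and a filesystem that accepts O_DIRECT:

  flags_ro=(direct nonblock)
  flags_rw=("${flags_ro[@]}" sync dsync)
  for flag_ro in "${flags_ro[@]}"; do
      for flag_rw in "${flags_rw[@]}"; do
          # every read-flag/write-flag pair copies dump0 over dump1
          spdk_dd --if=/tmp/dd.dump0 --iflag="$flag_ro" \
                  --of=/tmp/dd.dump1 --oflag="$flag_rw"
      done
  done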
00:07:06.538 [2024-11-29 12:53:37.929290] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60311 ] 00:07:06.797 [2024-11-29 12:53:38.077081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.797 [2024-11-29 12:53:38.140635] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.797 [2024-11-29 12:53:38.214013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:06.797  [2024-11-29T12:53:38.571Z] Copying: 512/512 [B] (average 500 kBps) 00:07:07.056 00:07:07.057 12:53:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9j4ynqy69gttlkomxsw4jx7mip2id5b19itowajoen6y6cyvi2teixvyj6trcxno5u33f4a1u29ztgl3vs3svbkv4jg068hqwe8olrlrlu1kipuqe2gkk4a4xog7k9li7kqvdte8xlch6ni64gp328oifr3q7oy6llmv9kfond51ry2qdie195atkwihqvx5gprtaz4aloeeo8iv73r1pfpami4o2yie53t0mz4klesrgi10qzlfnr0karzceedc7ktmk74dkaaz6x9yp60cs53rhtv8rgbhh8oshyktdwa1xjkk9t20w6rkhhtetincuvzbt54jwl6e64h9i803x95ba8rin8uvl8jxs6lntp35m4orpax8bs4c60mu5bjfmjfljlaxglr3x9sjb76776ootsln2k45xieuvrcmqvohx6j76nbx4ock0912prot10cddvxpjxzjnz1khxueg888skttjug9iw476iw1vuoslolwa1xre55hmp77gf3w == \9\j\4\y\n\q\y\6\9\g\t\t\l\k\o\m\x\s\w\4\j\x\7\m\i\p\2\i\d\5\b\1\9\i\t\o\w\a\j\o\e\n\6\y\6\c\y\v\i\2\t\e\i\x\v\y\j\6\t\r\c\x\n\o\5\u\3\3\f\4\a\1\u\2\9\z\t\g\l\3\v\s\3\s\v\b\k\v\4\j\g\0\6\8\h\q\w\e\8\o\l\r\l\r\l\u\1\k\i\p\u\q\e\2\g\k\k\4\a\4\x\o\g\7\k\9\l\i\7\k\q\v\d\t\e\8\x\l\c\h\6\n\i\6\4\g\p\3\2\8\o\i\f\r\3\q\7\o\y\6\l\l\m\v\9\k\f\o\n\d\5\1\r\y\2\q\d\i\e\1\9\5\a\t\k\w\i\h\q\v\x\5\g\p\r\t\a\z\4\a\l\o\e\e\o\8\i\v\7\3\r\1\p\f\p\a\m\i\4\o\2\y\i\e\5\3\t\0\m\z\4\k\l\e\s\r\g\i\1\0\q\z\l\f\n\r\0\k\a\r\z\c\e\e\d\c\7\k\t\m\k\7\4\d\k\a\a\z\6\x\9\y\p\6\0\c\s\5\3\r\h\t\v\8\r\g\b\h\h\8\o\s\h\y\k\t\d\w\a\1\x\j\k\k\9\t\2\0\w\6\r\k\h\h\t\e\t\i\n\c\u\v\z\b\t\5\4\j\w\l\6\e\6\4\h\9\i\8\0\3\x\9\5\b\a\8\r\i\n\8\u\v\l\8\j\x\s\6\l\n\t\p\3\5\m\4\o\r\p\a\x\8\b\s\4\c\6\0\m\u\5\b\j\f\m\j\f\l\j\l\a\x\g\l\r\3\x\9\s\j\b\7\6\7\7\6\o\o\t\s\l\n\2\k\4\5\x\i\e\u\v\r\c\m\q\v\o\h\x\6\j\7\6\n\b\x\4\o\c\k\0\9\1\2\p\r\o\t\1\0\c\d\d\v\x\p\j\x\z\j\n\z\1\k\h\x\u\e\g\8\8\8\s\k\t\t\j\u\g\9\i\w\4\7\6\i\w\1\v\u\o\s\l\o\l\w\a\1\x\r\e\5\5\h\m\p\7\7\g\f\3\w ]] 00:07:07.057 12:53:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:07.057 12:53:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:07.316 [2024-11-29 12:53:38.569063] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:07:07.316 [2024-11-29 12:53:38.569166] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60321 ] 00:07:07.316 [2024-11-29 12:53:38.716753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.316 [2024-11-29 12:53:38.777872] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.575 [2024-11-29 12:53:38.851485] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:07.575  [2024-11-29T12:53:39.349Z] Copying: 512/512 [B] (average 500 kBps) 00:07:07.834 00:07:07.834 12:53:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9j4ynqy69gttlkomxsw4jx7mip2id5b19itowajoen6y6cyvi2teixvyj6trcxno5u33f4a1u29ztgl3vs3svbkv4jg068hqwe8olrlrlu1kipuqe2gkk4a4xog7k9li7kqvdte8xlch6ni64gp328oifr3q7oy6llmv9kfond51ry2qdie195atkwihqvx5gprtaz4aloeeo8iv73r1pfpami4o2yie53t0mz4klesrgi10qzlfnr0karzceedc7ktmk74dkaaz6x9yp60cs53rhtv8rgbhh8oshyktdwa1xjkk9t20w6rkhhtetincuvzbt54jwl6e64h9i803x95ba8rin8uvl8jxs6lntp35m4orpax8bs4c60mu5bjfmjfljlaxglr3x9sjb76776ootsln2k45xieuvrcmqvohx6j76nbx4ock0912prot10cddvxpjxzjnz1khxueg888skttjug9iw476iw1vuoslolwa1xre55hmp77gf3w == \9\j\4\y\n\q\y\6\9\g\t\t\l\k\o\m\x\s\w\4\j\x\7\m\i\p\2\i\d\5\b\1\9\i\t\o\w\a\j\o\e\n\6\y\6\c\y\v\i\2\t\e\i\x\v\y\j\6\t\r\c\x\n\o\5\u\3\3\f\4\a\1\u\2\9\z\t\g\l\3\v\s\3\s\v\b\k\v\4\j\g\0\6\8\h\q\w\e\8\o\l\r\l\r\l\u\1\k\i\p\u\q\e\2\g\k\k\4\a\4\x\o\g\7\k\9\l\i\7\k\q\v\d\t\e\8\x\l\c\h\6\n\i\6\4\g\p\3\2\8\o\i\f\r\3\q\7\o\y\6\l\l\m\v\9\k\f\o\n\d\5\1\r\y\2\q\d\i\e\1\9\5\a\t\k\w\i\h\q\v\x\5\g\p\r\t\a\z\4\a\l\o\e\e\o\8\i\v\7\3\r\1\p\f\p\a\m\i\4\o\2\y\i\e\5\3\t\0\m\z\4\k\l\e\s\r\g\i\1\0\q\z\l\f\n\r\0\k\a\r\z\c\e\e\d\c\7\k\t\m\k\7\4\d\k\a\a\z\6\x\9\y\p\6\0\c\s\5\3\r\h\t\v\8\r\g\b\h\h\8\o\s\h\y\k\t\d\w\a\1\x\j\k\k\9\t\2\0\w\6\r\k\h\h\t\e\t\i\n\c\u\v\z\b\t\5\4\j\w\l\6\e\6\4\h\9\i\8\0\3\x\9\5\b\a\8\r\i\n\8\u\v\l\8\j\x\s\6\l\n\t\p\3\5\m\4\o\r\p\a\x\8\b\s\4\c\6\0\m\u\5\b\j\f\m\j\f\l\j\l\a\x\g\l\r\3\x\9\s\j\b\7\6\7\7\6\o\o\t\s\l\n\2\k\4\5\x\i\e\u\v\r\c\m\q\v\o\h\x\6\j\7\6\n\b\x\4\o\c\k\0\9\1\2\p\r\o\t\1\0\c\d\d\v\x\p\j\x\z\j\n\z\1\k\h\x\u\e\g\8\8\8\s\k\t\t\j\u\g\9\i\w\4\7\6\i\w\1\v\u\o\s\l\o\l\w\a\1\x\r\e\5\5\h\m\p\7\7\g\f\3\w ]] 00:07:07.834 12:53:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:07.834 12:53:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:07.834 [2024-11-29 12:53:39.193160] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
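Each copy in this matrix is followed by a content check, which is why the trace prints the full 512-character payload twice: the harness reads both dump files back and compares them as strings. A hedged approximation of that check (the real helper lives in the dd test scripts, so treat this as a sketch):

  # Approximate form of the post-copy verification shown in the [[ ... == ... ]] lines.
  src_data=$(< "$SRC")
  dst_data=$(< "$DST")
  [[ "$dst_data" == "$src_data" ]] || { echo "flag combination corrupted the copy" >&2; exit 1; }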
00:07:07.834 [2024-11-29 12:53:39.193246] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60331 ] 00:07:07.834 [2024-11-29 12:53:39.333317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.095 [2024-11-29 12:53:39.401825] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.095 [2024-11-29 12:53:39.479154] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:08.095  [2024-11-29T12:53:39.870Z] Copying: 512/512 [B] (average 166 kBps) 00:07:08.355 00:07:08.355 12:53:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9j4ynqy69gttlkomxsw4jx7mip2id5b19itowajoen6y6cyvi2teixvyj6trcxno5u33f4a1u29ztgl3vs3svbkv4jg068hqwe8olrlrlu1kipuqe2gkk4a4xog7k9li7kqvdte8xlch6ni64gp328oifr3q7oy6llmv9kfond51ry2qdie195atkwihqvx5gprtaz4aloeeo8iv73r1pfpami4o2yie53t0mz4klesrgi10qzlfnr0karzceedc7ktmk74dkaaz6x9yp60cs53rhtv8rgbhh8oshyktdwa1xjkk9t20w6rkhhtetincuvzbt54jwl6e64h9i803x95ba8rin8uvl8jxs6lntp35m4orpax8bs4c60mu5bjfmjfljlaxglr3x9sjb76776ootsln2k45xieuvrcmqvohx6j76nbx4ock0912prot10cddvxpjxzjnz1khxueg888skttjug9iw476iw1vuoslolwa1xre55hmp77gf3w == \9\j\4\y\n\q\y\6\9\g\t\t\l\k\o\m\x\s\w\4\j\x\7\m\i\p\2\i\d\5\b\1\9\i\t\o\w\a\j\o\e\n\6\y\6\c\y\v\i\2\t\e\i\x\v\y\j\6\t\r\c\x\n\o\5\u\3\3\f\4\a\1\u\2\9\z\t\g\l\3\v\s\3\s\v\b\k\v\4\j\g\0\6\8\h\q\w\e\8\o\l\r\l\r\l\u\1\k\i\p\u\q\e\2\g\k\k\4\a\4\x\o\g\7\k\9\l\i\7\k\q\v\d\t\e\8\x\l\c\h\6\n\i\6\4\g\p\3\2\8\o\i\f\r\3\q\7\o\y\6\l\l\m\v\9\k\f\o\n\d\5\1\r\y\2\q\d\i\e\1\9\5\a\t\k\w\i\h\q\v\x\5\g\p\r\t\a\z\4\a\l\o\e\e\o\8\i\v\7\3\r\1\p\f\p\a\m\i\4\o\2\y\i\e\5\3\t\0\m\z\4\k\l\e\s\r\g\i\1\0\q\z\l\f\n\r\0\k\a\r\z\c\e\e\d\c\7\k\t\m\k\7\4\d\k\a\a\z\6\x\9\y\p\6\0\c\s\5\3\r\h\t\v\8\r\g\b\h\h\8\o\s\h\y\k\t\d\w\a\1\x\j\k\k\9\t\2\0\w\6\r\k\h\h\t\e\t\i\n\c\u\v\z\b\t\5\4\j\w\l\6\e\6\4\h\9\i\8\0\3\x\9\5\b\a\8\r\i\n\8\u\v\l\8\j\x\s\6\l\n\t\p\3\5\m\4\o\r\p\a\x\8\b\s\4\c\6\0\m\u\5\b\j\f\m\j\f\l\j\l\a\x\g\l\r\3\x\9\s\j\b\7\6\7\7\6\o\o\t\s\l\n\2\k\4\5\x\i\e\u\v\r\c\m\q\v\o\h\x\6\j\7\6\n\b\x\4\o\c\k\0\9\1\2\p\r\o\t\1\0\c\d\d\v\x\p\j\x\z\j\n\z\1\k\h\x\u\e\g\8\8\8\s\k\t\t\j\u\g\9\i\w\4\7\6\i\w\1\v\u\o\s\l\o\l\w\a\1\x\r\e\5\5\h\m\p\7\7\g\f\3\w ]] 00:07:08.355 12:53:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:08.355 12:53:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:08.355 [2024-11-29 12:53:39.831053] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
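The run that just finished used --oflag=sync and the next one switches to --oflag=dsync; the pair mirrors O_SYNC (synchronized data and metadata) versus O_DSYNC (synchronized data only), and the lower kBps averages logged for these runs are at least consistent with the extra flushing. For reference, the same two output modes can be tried with coreutils dd; this is illustrative only and not part of the harness:

  # Coreutils equivalents of the two synchronized output modes exercised here.
  dd if=dd.dump0 of=dd.dump1 bs=512 count=1 oflag=sync    # O_SYNC: data and metadata
  dd if=dd.dump0 of=dd.dump1 bs=512 count=1 oflag=dsync   # O_DSYNC: data only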
00:07:08.355 [2024-11-29 12:53:39.831145] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60340 ] 00:07:08.614 [2024-11-29 12:53:39.983233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.614 [2024-11-29 12:53:40.049462] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.614 [2024-11-29 12:53:40.111305] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:08.873  [2024-11-29T12:53:40.388Z] Copying: 512/512 [B] (average 250 kBps) 00:07:08.873 00:07:08.873 12:53:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9j4ynqy69gttlkomxsw4jx7mip2id5b19itowajoen6y6cyvi2teixvyj6trcxno5u33f4a1u29ztgl3vs3svbkv4jg068hqwe8olrlrlu1kipuqe2gkk4a4xog7k9li7kqvdte8xlch6ni64gp328oifr3q7oy6llmv9kfond51ry2qdie195atkwihqvx5gprtaz4aloeeo8iv73r1pfpami4o2yie53t0mz4klesrgi10qzlfnr0karzceedc7ktmk74dkaaz6x9yp60cs53rhtv8rgbhh8oshyktdwa1xjkk9t20w6rkhhtetincuvzbt54jwl6e64h9i803x95ba8rin8uvl8jxs6lntp35m4orpax8bs4c60mu5bjfmjfljlaxglr3x9sjb76776ootsln2k45xieuvrcmqvohx6j76nbx4ock0912prot10cddvxpjxzjnz1khxueg888skttjug9iw476iw1vuoslolwa1xre55hmp77gf3w == \9\j\4\y\n\q\y\6\9\g\t\t\l\k\o\m\x\s\w\4\j\x\7\m\i\p\2\i\d\5\b\1\9\i\t\o\w\a\j\o\e\n\6\y\6\c\y\v\i\2\t\e\i\x\v\y\j\6\t\r\c\x\n\o\5\u\3\3\f\4\a\1\u\2\9\z\t\g\l\3\v\s\3\s\v\b\k\v\4\j\g\0\6\8\h\q\w\e\8\o\l\r\l\r\l\u\1\k\i\p\u\q\e\2\g\k\k\4\a\4\x\o\g\7\k\9\l\i\7\k\q\v\d\t\e\8\x\l\c\h\6\n\i\6\4\g\p\3\2\8\o\i\f\r\3\q\7\o\y\6\l\l\m\v\9\k\f\o\n\d\5\1\r\y\2\q\d\i\e\1\9\5\a\t\k\w\i\h\q\v\x\5\g\p\r\t\a\z\4\a\l\o\e\e\o\8\i\v\7\3\r\1\p\f\p\a\m\i\4\o\2\y\i\e\5\3\t\0\m\z\4\k\l\e\s\r\g\i\1\0\q\z\l\f\n\r\0\k\a\r\z\c\e\e\d\c\7\k\t\m\k\7\4\d\k\a\a\z\6\x\9\y\p\6\0\c\s\5\3\r\h\t\v\8\r\g\b\h\h\8\o\s\h\y\k\t\d\w\a\1\x\j\k\k\9\t\2\0\w\6\r\k\h\h\t\e\t\i\n\c\u\v\z\b\t\5\4\j\w\l\6\e\6\4\h\9\i\8\0\3\x\9\5\b\a\8\r\i\n\8\u\v\l\8\j\x\s\6\l\n\t\p\3\5\m\4\o\r\p\a\x\8\b\s\4\c\6\0\m\u\5\b\j\f\m\j\f\l\j\l\a\x\g\l\r\3\x\9\s\j\b\7\6\7\7\6\o\o\t\s\l\n\2\k\4\5\x\i\e\u\v\r\c\m\q\v\o\h\x\6\j\7\6\n\b\x\4\o\c\k\0\9\1\2\p\r\o\t\1\0\c\d\d\v\x\p\j\x\z\j\n\z\1\k\h\x\u\e\g\8\8\8\s\k\t\t\j\u\g\9\i\w\4\7\6\i\w\1\v\u\o\s\l\o\l\w\a\1\x\r\e\5\5\h\m\p\7\7\g\f\3\w ]] 00:07:08.873 12:53:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:08.873 12:53:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:08.873 12:53:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:08.873 12:53:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:08.873 12:53:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:08.873 12:53:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:09.132 [2024-11-29 12:53:40.418395] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:07:09.132 [2024-11-29 12:53:40.418487] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60355 ] 00:07:09.132 [2024-11-29 12:53:40.565636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.132 [2024-11-29 12:53:40.622579] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.392 [2024-11-29 12:53:40.683979] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:09.392  [2024-11-29T12:53:41.167Z] Copying: 512/512 [B] (average 500 kBps) 00:07:09.652 00:07:09.652 12:53:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ zs952u9ekq2tcdgt6l11io2mp2rxmcolpggbia4qcifufzmldbr278p04an84ey8fv0mheb88dfv4tv1alcm4x7hral98dndluz1lvr5qjsiucy1h555m8kbxf1eega3f1hqdoj0gik0xhjctemnfga86y1ynk9sthsa2pcv5v43itfkgg5trodkdjq7r9zbon2hfkppn98krjzp9y9345brvs7d72nwjw6o72st07wr8whe5m55insrevka1f8hjsxg4gmr0gfqeiyu42h57pkbnsndsyfqb8lek533rko1u785fv8typ9sdodtjgmqmnm2uqjb59ejvplb6qjixst9syleozn54tu8cx5hiepuooc5vy4aadods38up46cuupihxodvfm1as7c5yxn7mgo15pdd7ssk1kx01k39c3byajkgfwynhrhlfspjwkjutw9zyyg4n8t8k04oa19or3whmbqg5qrnu5wo753dcvhacpx3z57xlcjmxicg879 == \z\s\9\5\2\u\9\e\k\q\2\t\c\d\g\t\6\l\1\1\i\o\2\m\p\2\r\x\m\c\o\l\p\g\g\b\i\a\4\q\c\i\f\u\f\z\m\l\d\b\r\2\7\8\p\0\4\a\n\8\4\e\y\8\f\v\0\m\h\e\b\8\8\d\f\v\4\t\v\1\a\l\c\m\4\x\7\h\r\a\l\9\8\d\n\d\l\u\z\1\l\v\r\5\q\j\s\i\u\c\y\1\h\5\5\5\m\8\k\b\x\f\1\e\e\g\a\3\f\1\h\q\d\o\j\0\g\i\k\0\x\h\j\c\t\e\m\n\f\g\a\8\6\y\1\y\n\k\9\s\t\h\s\a\2\p\c\v\5\v\4\3\i\t\f\k\g\g\5\t\r\o\d\k\d\j\q\7\r\9\z\b\o\n\2\h\f\k\p\p\n\9\8\k\r\j\z\p\9\y\9\3\4\5\b\r\v\s\7\d\7\2\n\w\j\w\6\o\7\2\s\t\0\7\w\r\8\w\h\e\5\m\5\5\i\n\s\r\e\v\k\a\1\f\8\h\j\s\x\g\4\g\m\r\0\g\f\q\e\i\y\u\4\2\h\5\7\p\k\b\n\s\n\d\s\y\f\q\b\8\l\e\k\5\3\3\r\k\o\1\u\7\8\5\f\v\8\t\y\p\9\s\d\o\d\t\j\g\m\q\m\n\m\2\u\q\j\b\5\9\e\j\v\p\l\b\6\q\j\i\x\s\t\9\s\y\l\e\o\z\n\5\4\t\u\8\c\x\5\h\i\e\p\u\o\o\c\5\v\y\4\a\a\d\o\d\s\3\8\u\p\4\6\c\u\u\p\i\h\x\o\d\v\f\m\1\a\s\7\c\5\y\x\n\7\m\g\o\1\5\p\d\d\7\s\s\k\1\k\x\0\1\k\3\9\c\3\b\y\a\j\k\g\f\w\y\n\h\r\h\l\f\s\p\j\w\k\j\u\t\w\9\z\y\y\g\4\n\8\t\8\k\0\4\o\a\1\9\o\r\3\w\h\m\b\q\g\5\q\r\n\u\5\w\o\7\5\3\d\c\v\h\a\c\p\x\3\z\5\7\x\l\c\j\m\x\i\c\g\8\7\9 ]] 00:07:09.652 12:53:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:09.652 12:53:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:09.652 [2024-11-29 12:53:40.980581] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:07:09.652 [2024-11-29 12:53:40.980905] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60359 ] 00:07:09.652 [2024-11-29 12:53:41.127973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.911 [2024-11-29 12:53:41.185564] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.911 [2024-11-29 12:53:41.245655] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:09.912  [2024-11-29T12:53:41.686Z] Copying: 512/512 [B] (average 500 kBps) 00:07:10.171 00:07:10.171 12:53:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ zs952u9ekq2tcdgt6l11io2mp2rxmcolpggbia4qcifufzmldbr278p04an84ey8fv0mheb88dfv4tv1alcm4x7hral98dndluz1lvr5qjsiucy1h555m8kbxf1eega3f1hqdoj0gik0xhjctemnfga86y1ynk9sthsa2pcv5v43itfkgg5trodkdjq7r9zbon2hfkppn98krjzp9y9345brvs7d72nwjw6o72st07wr8whe5m55insrevka1f8hjsxg4gmr0gfqeiyu42h57pkbnsndsyfqb8lek533rko1u785fv8typ9sdodtjgmqmnm2uqjb59ejvplb6qjixst9syleozn54tu8cx5hiepuooc5vy4aadods38up46cuupihxodvfm1as7c5yxn7mgo15pdd7ssk1kx01k39c3byajkgfwynhrhlfspjwkjutw9zyyg4n8t8k04oa19or3whmbqg5qrnu5wo753dcvhacpx3z57xlcjmxicg879 == \z\s\9\5\2\u\9\e\k\q\2\t\c\d\g\t\6\l\1\1\i\o\2\m\p\2\r\x\m\c\o\l\p\g\g\b\i\a\4\q\c\i\f\u\f\z\m\l\d\b\r\2\7\8\p\0\4\a\n\8\4\e\y\8\f\v\0\m\h\e\b\8\8\d\f\v\4\t\v\1\a\l\c\m\4\x\7\h\r\a\l\9\8\d\n\d\l\u\z\1\l\v\r\5\q\j\s\i\u\c\y\1\h\5\5\5\m\8\k\b\x\f\1\e\e\g\a\3\f\1\h\q\d\o\j\0\g\i\k\0\x\h\j\c\t\e\m\n\f\g\a\8\6\y\1\y\n\k\9\s\t\h\s\a\2\p\c\v\5\v\4\3\i\t\f\k\g\g\5\t\r\o\d\k\d\j\q\7\r\9\z\b\o\n\2\h\f\k\p\p\n\9\8\k\r\j\z\p\9\y\9\3\4\5\b\r\v\s\7\d\7\2\n\w\j\w\6\o\7\2\s\t\0\7\w\r\8\w\h\e\5\m\5\5\i\n\s\r\e\v\k\a\1\f\8\h\j\s\x\g\4\g\m\r\0\g\f\q\e\i\y\u\4\2\h\5\7\p\k\b\n\s\n\d\s\y\f\q\b\8\l\e\k\5\3\3\r\k\o\1\u\7\8\5\f\v\8\t\y\p\9\s\d\o\d\t\j\g\m\q\m\n\m\2\u\q\j\b\5\9\e\j\v\p\l\b\6\q\j\i\x\s\t\9\s\y\l\e\o\z\n\5\4\t\u\8\c\x\5\h\i\e\p\u\o\o\c\5\v\y\4\a\a\d\o\d\s\3\8\u\p\4\6\c\u\u\p\i\h\x\o\d\v\f\m\1\a\s\7\c\5\y\x\n\7\m\g\o\1\5\p\d\d\7\s\s\k\1\k\x\0\1\k\3\9\c\3\b\y\a\j\k\g\f\w\y\n\h\r\h\l\f\s\p\j\w\k\j\u\t\w\9\z\y\y\g\4\n\8\t\8\k\0\4\o\a\1\9\o\r\3\w\h\m\b\q\g\5\q\r\n\u\5\w\o\7\5\3\d\c\v\h\a\c\p\x\3\z\5\7\x\l\c\j\m\x\i\c\g\8\7\9 ]] 00:07:10.171 12:53:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:10.171 12:53:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:10.171 [2024-11-29 12:53:41.547824] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:07:10.171 [2024-11-29 12:53:41.547941] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60374 ] 00:07:10.431 [2024-11-29 12:53:41.695251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.431 [2024-11-29 12:53:41.748330] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.431 [2024-11-29 12:53:41.811212] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:10.431  [2024-11-29T12:53:42.206Z] Copying: 512/512 [B] (average 250 kBps) 00:07:10.691 00:07:10.691 12:53:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ zs952u9ekq2tcdgt6l11io2mp2rxmcolpggbia4qcifufzmldbr278p04an84ey8fv0mheb88dfv4tv1alcm4x7hral98dndluz1lvr5qjsiucy1h555m8kbxf1eega3f1hqdoj0gik0xhjctemnfga86y1ynk9sthsa2pcv5v43itfkgg5trodkdjq7r9zbon2hfkppn98krjzp9y9345brvs7d72nwjw6o72st07wr8whe5m55insrevka1f8hjsxg4gmr0gfqeiyu42h57pkbnsndsyfqb8lek533rko1u785fv8typ9sdodtjgmqmnm2uqjb59ejvplb6qjixst9syleozn54tu8cx5hiepuooc5vy4aadods38up46cuupihxodvfm1as7c5yxn7mgo15pdd7ssk1kx01k39c3byajkgfwynhrhlfspjwkjutw9zyyg4n8t8k04oa19or3whmbqg5qrnu5wo753dcvhacpx3z57xlcjmxicg879 == \z\s\9\5\2\u\9\e\k\q\2\t\c\d\g\t\6\l\1\1\i\o\2\m\p\2\r\x\m\c\o\l\p\g\g\b\i\a\4\q\c\i\f\u\f\z\m\l\d\b\r\2\7\8\p\0\4\a\n\8\4\e\y\8\f\v\0\m\h\e\b\8\8\d\f\v\4\t\v\1\a\l\c\m\4\x\7\h\r\a\l\9\8\d\n\d\l\u\z\1\l\v\r\5\q\j\s\i\u\c\y\1\h\5\5\5\m\8\k\b\x\f\1\e\e\g\a\3\f\1\h\q\d\o\j\0\g\i\k\0\x\h\j\c\t\e\m\n\f\g\a\8\6\y\1\y\n\k\9\s\t\h\s\a\2\p\c\v\5\v\4\3\i\t\f\k\g\g\5\t\r\o\d\k\d\j\q\7\r\9\z\b\o\n\2\h\f\k\p\p\n\9\8\k\r\j\z\p\9\y\9\3\4\5\b\r\v\s\7\d\7\2\n\w\j\w\6\o\7\2\s\t\0\7\w\r\8\w\h\e\5\m\5\5\i\n\s\r\e\v\k\a\1\f\8\h\j\s\x\g\4\g\m\r\0\g\f\q\e\i\y\u\4\2\h\5\7\p\k\b\n\s\n\d\s\y\f\q\b\8\l\e\k\5\3\3\r\k\o\1\u\7\8\5\f\v\8\t\y\p\9\s\d\o\d\t\j\g\m\q\m\n\m\2\u\q\j\b\5\9\e\j\v\p\l\b\6\q\j\i\x\s\t\9\s\y\l\e\o\z\n\5\4\t\u\8\c\x\5\h\i\e\p\u\o\o\c\5\v\y\4\a\a\d\o\d\s\3\8\u\p\4\6\c\u\u\p\i\h\x\o\d\v\f\m\1\a\s\7\c\5\y\x\n\7\m\g\o\1\5\p\d\d\7\s\s\k\1\k\x\0\1\k\3\9\c\3\b\y\a\j\k\g\f\w\y\n\h\r\h\l\f\s\p\j\w\k\j\u\t\w\9\z\y\y\g\4\n\8\t\8\k\0\4\o\a\1\9\o\r\3\w\h\m\b\q\g\5\q\r\n\u\5\w\o\7\5\3\d\c\v\h\a\c\p\x\3\z\5\7\x\l\c\j\m\x\i\c\g\8\7\9 ]] 00:07:10.691 12:53:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:10.691 12:53:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:10.691 [2024-11-29 12:53:42.094758] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:07:10.691 [2024-11-29 12:53:42.094842] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60384 ] 00:07:10.950 [2024-11-29 12:53:42.233297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.950 [2024-11-29 12:53:42.290311] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.950 [2024-11-29 12:53:42.351907] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:10.950  [2024-11-29T12:53:42.724Z] Copying: 512/512 [B] (average 166 kBps) 00:07:11.209 00:07:11.209 12:53:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ zs952u9ekq2tcdgt6l11io2mp2rxmcolpggbia4qcifufzmldbr278p04an84ey8fv0mheb88dfv4tv1alcm4x7hral98dndluz1lvr5qjsiucy1h555m8kbxf1eega3f1hqdoj0gik0xhjctemnfga86y1ynk9sthsa2pcv5v43itfkgg5trodkdjq7r9zbon2hfkppn98krjzp9y9345brvs7d72nwjw6o72st07wr8whe5m55insrevka1f8hjsxg4gmr0gfqeiyu42h57pkbnsndsyfqb8lek533rko1u785fv8typ9sdodtjgmqmnm2uqjb59ejvplb6qjixst9syleozn54tu8cx5hiepuooc5vy4aadods38up46cuupihxodvfm1as7c5yxn7mgo15pdd7ssk1kx01k39c3byajkgfwynhrhlfspjwkjutw9zyyg4n8t8k04oa19or3whmbqg5qrnu5wo753dcvhacpx3z57xlcjmxicg879 == \z\s\9\5\2\u\9\e\k\q\2\t\c\d\g\t\6\l\1\1\i\o\2\m\p\2\r\x\m\c\o\l\p\g\g\b\i\a\4\q\c\i\f\u\f\z\m\l\d\b\r\2\7\8\p\0\4\a\n\8\4\e\y\8\f\v\0\m\h\e\b\8\8\d\f\v\4\t\v\1\a\l\c\m\4\x\7\h\r\a\l\9\8\d\n\d\l\u\z\1\l\v\r\5\q\j\s\i\u\c\y\1\h\5\5\5\m\8\k\b\x\f\1\e\e\g\a\3\f\1\h\q\d\o\j\0\g\i\k\0\x\h\j\c\t\e\m\n\f\g\a\8\6\y\1\y\n\k\9\s\t\h\s\a\2\p\c\v\5\v\4\3\i\t\f\k\g\g\5\t\r\o\d\k\d\j\q\7\r\9\z\b\o\n\2\h\f\k\p\p\n\9\8\k\r\j\z\p\9\y\9\3\4\5\b\r\v\s\7\d\7\2\n\w\j\w\6\o\7\2\s\t\0\7\w\r\8\w\h\e\5\m\5\5\i\n\s\r\e\v\k\a\1\f\8\h\j\s\x\g\4\g\m\r\0\g\f\q\e\i\y\u\4\2\h\5\7\p\k\b\n\s\n\d\s\y\f\q\b\8\l\e\k\5\3\3\r\k\o\1\u\7\8\5\f\v\8\t\y\p\9\s\d\o\d\t\j\g\m\q\m\n\m\2\u\q\j\b\5\9\e\j\v\p\l\b\6\q\j\i\x\s\t\9\s\y\l\e\o\z\n\5\4\t\u\8\c\x\5\h\i\e\p\u\o\o\c\5\v\y\4\a\a\d\o\d\s\3\8\u\p\4\6\c\u\u\p\i\h\x\o\d\v\f\m\1\a\s\7\c\5\y\x\n\7\m\g\o\1\5\p\d\d\7\s\s\k\1\k\x\0\1\k\3\9\c\3\b\y\a\j\k\g\f\w\y\n\h\r\h\l\f\s\p\j\w\k\j\u\t\w\9\z\y\y\g\4\n\8\t\8\k\0\4\o\a\1\9\o\r\3\w\h\m\b\q\g\5\q\r\n\u\5\w\o\7\5\3\d\c\v\h\a\c\p\x\3\z\5\7\x\l\c\j\m\x\i\c\g\8\7\9 ]] 00:07:11.209 00:07:11.209 real 0m4.742s 00:07:11.209 user 0m2.568s 00:07:11.209 sys 0m2.614s 00:07:11.209 ************************************ 00:07:11.209 END TEST dd_flags_misc 00:07:11.209 ************************************ 00:07:11.209 12:53:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.209 12:53:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:11.209 12:53:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:07:11.209 12:53:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:11.209 * Second test run, disabling liburing, forcing AIO 00:07:11.209 12:53:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:11.209 12:53:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:11.209 12:53:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.209 12:53:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.209 12:53:42 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:07:11.209 ************************************ 00:07:11.209 START TEST dd_flag_append_forced_aio 00:07:11.209 ************************************ 00:07:11.209 12:53:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:07:11.209 12:53:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:07:11.209 12:53:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:07:11.209 12:53:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:07:11.209 12:53:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:11.209 12:53:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:11.209 12:53:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=gxzcpbn6qtpxjry2dp649uoa5bulj4cd 00:07:11.209 12:53:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:07:11.209 12:53:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:11.209 12:53:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:11.209 12:53:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=emv7p08kdsjdgvtsffih88a79vdxh6vl 00:07:11.209 12:53:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s gxzcpbn6qtpxjry2dp649uoa5bulj4cd 00:07:11.209 12:53:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s emv7p08kdsjdgvtsffih88a79vdxh6vl 00:07:11.209 12:53:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:11.469 [2024-11-29 12:53:42.723107] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
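This second pass repeats the posix flag tests with --aio on the spdk_dd command line, that is, with liburing disabled and POSIX AIO forced, as announced by the "Second test run" banner above. The append case writes a fresh 32-byte dump0 into a dump1 that already holds 32 bytes and expects dump1 to end up as the old dump1 payload followed by dump0, the concatenated string checked a few lines below. A rough stand-alone equivalent, with placeholder file names and a stand-in for gen_bytes:

  # Sketch of the O_APPEND check; $DD stands for the spdk_dd binary used above.
  dump0=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 32)
  dump1=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 32)
  printf %s "$dump0" > src.bin
  printf %s "$dump1" > dst.bin
  "$DD" --aio --if=src.bin --of=dst.bin --oflag=append
  [[ "$(< dst.bin)" == "${dump1}${dump0}" ]]   # appended after the existing data, not overwritten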
00:07:11.469 [2024-11-29 12:53:42.723342] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60412 ] 00:07:11.469 [2024-11-29 12:53:42.873852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.469 [2024-11-29 12:53:42.934837] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.728 [2024-11-29 12:53:42.994380] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:11.728  [2024-11-29T12:53:43.503Z] Copying: 32/32 [B] (average 31 kBps) 00:07:11.988 00:07:11.988 ************************************ 00:07:11.988 END TEST dd_flag_append_forced_aio 00:07:11.988 ************************************ 00:07:11.988 12:53:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ emv7p08kdsjdgvtsffih88a79vdxh6vlgxzcpbn6qtpxjry2dp649uoa5bulj4cd == \e\m\v\7\p\0\8\k\d\s\j\d\g\v\t\s\f\f\i\h\8\8\a\7\9\v\d\x\h\6\v\l\g\x\z\c\p\b\n\6\q\t\p\x\j\r\y\2\d\p\6\4\9\u\o\a\5\b\u\l\j\4\c\d ]] 00:07:11.988 00:07:11.988 real 0m0.590s 00:07:11.988 user 0m0.327s 00:07:11.988 sys 0m0.141s 00:07:11.988 12:53:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.988 12:53:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:11.988 12:53:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:11.988 12:53:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.988 12:53:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.988 12:53:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:11.988 ************************************ 00:07:11.988 START TEST dd_flag_directory_forced_aio 00:07:11.988 ************************************ 00:07:11.988 12:53:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:07:11.988 12:53:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:11.988 12:53:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:11.989 12:53:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:11.989 12:53:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:11.989 12:53:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.989 12:53:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:11.989 12:53:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.989 12:53:43 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:11.989 12:53:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.989 12:53:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:11.989 12:53:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:11.989 12:53:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:11.989 [2024-11-29 12:53:43.362309] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:07:11.989 [2024-11-29 12:53:43.362814] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60439 ] 00:07:12.248 [2024-11-29 12:53:43.509150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.248 [2024-11-29 12:53:43.579195] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.248 [2024-11-29 12:53:43.636994] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:12.248 [2024-11-29 12:53:43.676504] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:12.248 [2024-11-29 12:53:43.676553] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:12.248 [2024-11-29 12:53:43.676588] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:12.509 [2024-11-29 12:53:43.806437] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:12.509 12:53:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:07:12.509 12:53:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:12.509 12:53:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:07:12.509 12:53:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:12.509 12:53:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:12.509 12:53:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:12.509 12:53:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:12.509 12:53:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:12.509 12:53:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:12.509 12:53:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:12.509 12:53:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:12.509 12:53:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:12.509 12:53:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:12.509 12:53:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:12.509 12:53:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:12.509 12:53:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:12.509 12:53:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:12.509 12:53:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:12.509 [2024-11-29 12:53:43.939631] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:07:12.509 [2024-11-29 12:53:43.939734] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60448 ] 00:07:12.768 [2024-11-29 12:53:44.092065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.768 [2024-11-29 12:53:44.162620] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.768 [2024-11-29 12:53:44.225304] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:12.768 [2024-11-29 12:53:44.266608] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:12.768 [2024-11-29 12:53:44.266987] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:12.768 [2024-11-29 12:53:44.267021] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:13.028 [2024-11-29 12:53:44.395057] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:13.028 12:53:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:07:13.028 12:53:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:13.028 12:53:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:07:13.028 12:53:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:13.028 12:53:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:13.028 12:53:44 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:13.028 00:07:13.028 real 0m1.165s 00:07:13.028 user 0m0.631s 00:07:13.028 sys 0m0.322s 00:07:13.028 12:53:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.028 ************************************ 00:07:13.028 END TEST dd_flag_directory_forced_aio 00:07:13.028 ************************************ 00:07:13.028 12:53:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:13.028 12:53:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:13.028 12:53:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:13.028 12:53:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.028 12:53:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:13.028 ************************************ 00:07:13.028 START TEST dd_flag_nofollow_forced_aio 00:07:13.028 ************************************ 00:07:13.028 12:53:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:07:13.028 12:53:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:13.028 12:53:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:13.028 12:53:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:13.028 12:53:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:13.028 12:53:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:13.028 12:53:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:13.028 12:53:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:13.028 12:53:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.028 12:53:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.028 12:53:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.028 12:53:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.028 12:53:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.028 12:53:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.028 12:53:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.028 12:53:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:13.028 12:53:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:13.287 [2024-11-29 12:53:44.584734] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:07:13.287 [2024-11-29 12:53:44.584876] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60482 ] 00:07:13.287 [2024-11-29 12:53:44.726264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.287 [2024-11-29 12:53:44.785664] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.547 [2024-11-29 12:53:44.845021] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.547 [2024-11-29 12:53:44.883502] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:13.547 [2024-11-29 12:53:44.883916] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:13.547 [2024-11-29 12:53:44.883946] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:13.547 [2024-11-29 12:53:45.007846] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:13.806 12:53:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:07:13.806 12:53:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:13.806 12:53:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:07:13.806 12:53:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:13.806 12:53:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:13.806 12:53:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:13.806 12:53:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:13.806 12:53:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:13.806 12:53:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:13.806 12:53:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.806 12:53:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.806 12:53:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.806 12:53:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.806 12:53:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.806 12:53:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.806 12:53:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.806 12:53:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:13.806 12:53:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:13.806 [2024-11-29 12:53:45.127253] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:07:13.806 [2024-11-29 12:53:45.127570] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60492 ] 00:07:13.806 [2024-11-29 12:53:45.268495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.065 [2024-11-29 12:53:45.328797] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.065 [2024-11-29 12:53:45.390609] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:14.065 [2024-11-29 12:53:45.431535] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:14.065 [2024-11-29 12:53:45.431604] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:14.065 [2024-11-29 12:53:45.431640] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:14.065 [2024-11-29 12:53:45.559694] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:14.323 12:53:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:07:14.323 12:53:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:14.323 12:53:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:07:14.323 12:53:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:14.324 12:53:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:14.324 12:53:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:14.324 12:53:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:07:14.324 12:53:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:14.324 12:53:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:14.324 12:53:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:14.324 [2024-11-29 12:53:45.684120] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:07:14.324 [2024-11-29 12:53:45.684465] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60499 ] 00:07:14.324 [2024-11-29 12:53:45.823788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.582 [2024-11-29 12:53:45.886058] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.582 [2024-11-29 12:53:45.941475] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:14.582  [2024-11-29T12:53:46.356Z] Copying: 512/512 [B] (average 500 kBps) 00:07:14.841 00:07:14.841 12:53:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 6lnan4nu0aoqf5dzzt5nzinjtewpf8nbvhi4zocyb8nmc4myiy3uwblgbupidqb7bsdqt8t041u6tjierhikxt6q7y9op84rvosblxkzb36oz3dpts6f3zkxvhz4x73tjwp5aok4z569t0z53sdlg2yxkemxtfxdmhx2d7omv6ln241m9016jgfns4af9aau29hcxmzn9ascg99wq4870gplfe0u92gvmphydctelgwpt67loycycspaul5sfvbd6ptgfnbbeht8qh479309chnw2pje4qsfv2pmyn5ccesmz6kje5vnwr35kqceu5t71qvfjddp2rip6v8z437m1la071q1cbospk7tdlhmc3n8q2s0fhduzere6tw58c3o2yh70q7tbzk0a6xcmbl6l57de77ee6kvhaj2m3fblboxccladzmulygpg8wyu4ufsuewywzyruf5hpjdx8exwrtrqlfusvd7td7k120hb7p1dzmtilcbe4cg8apjo8k9 == \6\l\n\a\n\4\n\u\0\a\o\q\f\5\d\z\z\t\5\n\z\i\n\j\t\e\w\p\f\8\n\b\v\h\i\4\z\o\c\y\b\8\n\m\c\4\m\y\i\y\3\u\w\b\l\g\b\u\p\i\d\q\b\7\b\s\d\q\t\8\t\0\4\1\u\6\t\j\i\e\r\h\i\k\x\t\6\q\7\y\9\o\p\8\4\r\v\o\s\b\l\x\k\z\b\3\6\o\z\3\d\p\t\s\6\f\3\z\k\x\v\h\z\4\x\7\3\t\j\w\p\5\a\o\k\4\z\5\6\9\t\0\z\5\3\s\d\l\g\2\y\x\k\e\m\x\t\f\x\d\m\h\x\2\d\7\o\m\v\6\l\n\2\4\1\m\9\0\1\6\j\g\f\n\s\4\a\f\9\a\a\u\2\9\h\c\x\m\z\n\9\a\s\c\g\9\9\w\q\4\8\7\0\g\p\l\f\e\0\u\9\2\g\v\m\p\h\y\d\c\t\e\l\g\w\p\t\6\7\l\o\y\c\y\c\s\p\a\u\l\5\s\f\v\b\d\6\p\t\g\f\n\b\b\e\h\t\8\q\h\4\7\9\3\0\9\c\h\n\w\2\p\j\e\4\q\s\f\v\2\p\m\y\n\5\c\c\e\s\m\z\6\k\j\e\5\v\n\w\r\3\5\k\q\c\e\u\5\t\7\1\q\v\f\j\d\d\p\2\r\i\p\6\v\8\z\4\3\7\m\1\l\a\0\7\1\q\1\c\b\o\s\p\k\7\t\d\l\h\m\c\3\n\8\q\2\s\0\f\h\d\u\z\e\r\e\6\t\w\5\8\c\3\o\2\y\h\7\0\q\7\t\b\z\k\0\a\6\x\c\m\b\l\6\l\5\7\d\e\7\7\e\e\6\k\v\h\a\j\2\m\3\f\b\l\b\o\x\c\c\l\a\d\z\m\u\l\y\g\p\g\8\w\y\u\4\u\f\s\u\e\w\y\w\z\y\r\u\f\5\h\p\j\d\x\8\e\x\w\r\t\r\q\l\f\u\s\v\d\7\t\d\7\k\1\2\0\h\b\7\p\1\d\z\m\t\i\l\c\b\e\4\c\g\8\a\p\j\o\8\k\9 ]] 00:07:14.841 00:07:14.841 real 0m1.677s 00:07:14.841 user 0m0.921s 00:07:14.841 sys 0m0.425s 00:07:14.841 12:53:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.841 ************************************ 00:07:14.841 END TEST dd_flag_nofollow_forced_aio 00:07:14.841 ************************************ 00:07:14.842 12:53:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:14.842 12:53:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:07:14.842 12:53:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:14.842 12:53:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.842 12:53:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:14.842 ************************************ 00:07:14.842 START TEST dd_flag_noatime_forced_aio 00:07:14.842 ************************************ 00:07:14.842 12:53:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:07:14.842 12:53:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:07:14.842 12:53:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:07:14.842 12:53:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:07:14.842 12:53:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:14.842 12:53:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:14.842 12:53:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:14.842 12:53:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1732884825 00:07:14.842 12:53:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:14.842 12:53:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1732884826 00:07:14.842 12:53:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:07:15.780 12:53:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:16.040 [2024-11-29 12:53:47.342496] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
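The noatime case records the source file's access time with stat --printf=%X, sleeps a second, copies the file through spdk_dd with --iflag=noatime, and then asserts that the recorded atime has not moved (the (( atime_if == ... )) check a little further down). A minimal approximation of that pattern, again with placeholder names rather than the harness variables:

  # Sketch of the O_NOATIME assertion exercised by dd_flag_noatime_forced_aio.
  tr -dc 'a-z0-9' < /dev/urandom | head -c 512 > src.bin
  atime_before=$(stat --printf=%X src.bin)
  sleep 1
  "$DD" --aio --if=src.bin --iflag=noatime --of=dst.bin
  atime_after=$(stat --printf=%X src.bin)
  (( atime_before == atime_after ))   # reading with noatime must leave the atime untouched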
00:07:16.040 [2024-11-29 12:53:47.342855] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60540 ] 00:07:16.040 [2024-11-29 12:53:47.493744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.299 [2024-11-29 12:53:47.568190] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.299 [2024-11-29 12:53:47.630229] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.299  [2024-11-29T12:53:48.073Z] Copying: 512/512 [B] (average 500 kBps) 00:07:16.558 00:07:16.558 12:53:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:16.558 12:53:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1732884825 )) 00:07:16.558 12:53:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:16.558 12:53:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1732884826 )) 00:07:16.558 12:53:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:16.558 [2024-11-29 12:53:47.971499] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:07:16.558 [2024-11-29 12:53:47.971604] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60551 ] 00:07:16.817 [2024-11-29 12:53:48.118378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.817 [2024-11-29 12:53:48.176380] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.817 [2024-11-29 12:53:48.244039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.817  [2024-11-29T12:53:48.592Z] Copying: 512/512 [B] (average 500 kBps) 00:07:17.077 00:07:17.077 12:53:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:17.077 12:53:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1732884828 )) 00:07:17.077 ************************************ 00:07:17.077 END TEST dd_flag_noatime_forced_aio 00:07:17.077 ************************************ 00:07:17.077 00:07:17.077 real 0m2.266s 00:07:17.077 user 0m0.679s 00:07:17.077 sys 0m0.337s 00:07:17.077 12:53:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.077 12:53:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:17.077 12:53:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:17.077 12:53:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:17.077 12:53:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.077 12:53:48 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:07:17.077 ************************************ 00:07:17.077 START TEST dd_flags_misc_forced_aio 00:07:17.077 ************************************ 00:07:17.077 12:53:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:07:17.077 12:53:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:17.077 12:53:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:17.077 12:53:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:17.077 12:53:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:17.077 12:53:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:17.077 12:53:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:17.077 12:53:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:17.077 12:53:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:17.077 12:53:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:17.387 [2024-11-29 12:53:48.643331] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:07:17.387 [2024-11-29 12:53:48.643679] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60578 ] 00:07:17.387 [2024-11-29 12:53:48.793670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.387 [2024-11-29 12:53:48.861985] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.661 [2024-11-29 12:53:48.920845] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:17.661  [2024-11-29T12:53:49.435Z] Copying: 512/512 [B] (average 500 kBps) 00:07:17.920 00:07:17.920 12:53:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ofxavwau6kfq74k5n728kf96yn3do52jum0g0ttucfp3a0lulw7rpjdpkimzbuqaxrnoogzikadwmrhz0ii6ijyk2llbdhrudd5jalfzwim34pdhg99fc9puki17xxk1eq9nwihqioukzimu12173630pwm54k12nzxu62ucogqt95i2ek3tn2dunzfkx8d278z2rszowwfs2hf0pg3uuq6kyz6pb8oglna9pes069napb395tjyhu29mzcmjmct1pqom6oanxubx4h6ekwqdvhx8ywx6kndlnlj1je6lest7plj7y5ejrsukx8pbw4ikz6zj8h7l6xfnqlil1ukngcy24q14ea5nyoih6rtlcci4k9vyehrrk0uefr5sjx1u2q7coi0tjl95cz7lzm7kj5a8xw6u43fvhdu8q1han70govxkcfvvpf5p7gnw8ayk54ukr4oh8h5v5b4zzw2edmj37r5joyd73l3ffl57lp653e6ddz0sh205kueetxi == 
\o\f\x\a\v\w\a\u\6\k\f\q\7\4\k\5\n\7\2\8\k\f\9\6\y\n\3\d\o\5\2\j\u\m\0\g\0\t\t\u\c\f\p\3\a\0\l\u\l\w\7\r\p\j\d\p\k\i\m\z\b\u\q\a\x\r\n\o\o\g\z\i\k\a\d\w\m\r\h\z\0\i\i\6\i\j\y\k\2\l\l\b\d\h\r\u\d\d\5\j\a\l\f\z\w\i\m\3\4\p\d\h\g\9\9\f\c\9\p\u\k\i\1\7\x\x\k\1\e\q\9\n\w\i\h\q\i\o\u\k\z\i\m\u\1\2\1\7\3\6\3\0\p\w\m\5\4\k\1\2\n\z\x\u\6\2\u\c\o\g\q\t\9\5\i\2\e\k\3\t\n\2\d\u\n\z\f\k\x\8\d\2\7\8\z\2\r\s\z\o\w\w\f\s\2\h\f\0\p\g\3\u\u\q\6\k\y\z\6\p\b\8\o\g\l\n\a\9\p\e\s\0\6\9\n\a\p\b\3\9\5\t\j\y\h\u\2\9\m\z\c\m\j\m\c\t\1\p\q\o\m\6\o\a\n\x\u\b\x\4\h\6\e\k\w\q\d\v\h\x\8\y\w\x\6\k\n\d\l\n\l\j\1\j\e\6\l\e\s\t\7\p\l\j\7\y\5\e\j\r\s\u\k\x\8\p\b\w\4\i\k\z\6\z\j\8\h\7\l\6\x\f\n\q\l\i\l\1\u\k\n\g\c\y\2\4\q\1\4\e\a\5\n\y\o\i\h\6\r\t\l\c\c\i\4\k\9\v\y\e\h\r\r\k\0\u\e\f\r\5\s\j\x\1\u\2\q\7\c\o\i\0\t\j\l\9\5\c\z\7\l\z\m\7\k\j\5\a\8\x\w\6\u\4\3\f\v\h\d\u\8\q\1\h\a\n\7\0\g\o\v\x\k\c\f\v\v\p\f\5\p\7\g\n\w\8\a\y\k\5\4\u\k\r\4\o\h\8\h\5\v\5\b\4\z\z\w\2\e\d\m\j\3\7\r\5\j\o\y\d\7\3\l\3\f\f\l\5\7\l\p\6\5\3\e\6\d\d\z\0\s\h\2\0\5\k\u\e\e\t\x\i ]] 00:07:17.920 12:53:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:17.920 12:53:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:17.920 [2024-11-29 12:53:49.239043] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:07:17.920 [2024-11-29 12:53:49.239334] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60591 ] 00:07:17.920 [2024-11-29 12:53:49.385919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.180 [2024-11-29 12:53:49.434121] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.180 [2024-11-29 12:53:49.488212] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.180  [2024-11-29T12:53:49.954Z] Copying: 512/512 [B] (average 500 kBps) 00:07:18.439 00:07:18.439 12:53:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ofxavwau6kfq74k5n728kf96yn3do52jum0g0ttucfp3a0lulw7rpjdpkimzbuqaxrnoogzikadwmrhz0ii6ijyk2llbdhrudd5jalfzwim34pdhg99fc9puki17xxk1eq9nwihqioukzimu12173630pwm54k12nzxu62ucogqt95i2ek3tn2dunzfkx8d278z2rszowwfs2hf0pg3uuq6kyz6pb8oglna9pes069napb395tjyhu29mzcmjmct1pqom6oanxubx4h6ekwqdvhx8ywx6kndlnlj1je6lest7plj7y5ejrsukx8pbw4ikz6zj8h7l6xfnqlil1ukngcy24q14ea5nyoih6rtlcci4k9vyehrrk0uefr5sjx1u2q7coi0tjl95cz7lzm7kj5a8xw6u43fvhdu8q1han70govxkcfvvpf5p7gnw8ayk54ukr4oh8h5v5b4zzw2edmj37r5joyd73l3ffl57lp653e6ddz0sh205kueetxi == 
\o\f\x\a\v\w\a\u\6\k\f\q\7\4\k\5\n\7\2\8\k\f\9\6\y\n\3\d\o\5\2\j\u\m\0\g\0\t\t\u\c\f\p\3\a\0\l\u\l\w\7\r\p\j\d\p\k\i\m\z\b\u\q\a\x\r\n\o\o\g\z\i\k\a\d\w\m\r\h\z\0\i\i\6\i\j\y\k\2\l\l\b\d\h\r\u\d\d\5\j\a\l\f\z\w\i\m\3\4\p\d\h\g\9\9\f\c\9\p\u\k\i\1\7\x\x\k\1\e\q\9\n\w\i\h\q\i\o\u\k\z\i\m\u\1\2\1\7\3\6\3\0\p\w\m\5\4\k\1\2\n\z\x\u\6\2\u\c\o\g\q\t\9\5\i\2\e\k\3\t\n\2\d\u\n\z\f\k\x\8\d\2\7\8\z\2\r\s\z\o\w\w\f\s\2\h\f\0\p\g\3\u\u\q\6\k\y\z\6\p\b\8\o\g\l\n\a\9\p\e\s\0\6\9\n\a\p\b\3\9\5\t\j\y\h\u\2\9\m\z\c\m\j\m\c\t\1\p\q\o\m\6\o\a\n\x\u\b\x\4\h\6\e\k\w\q\d\v\h\x\8\y\w\x\6\k\n\d\l\n\l\j\1\j\e\6\l\e\s\t\7\p\l\j\7\y\5\e\j\r\s\u\k\x\8\p\b\w\4\i\k\z\6\z\j\8\h\7\l\6\x\f\n\q\l\i\l\1\u\k\n\g\c\y\2\4\q\1\4\e\a\5\n\y\o\i\h\6\r\t\l\c\c\i\4\k\9\v\y\e\h\r\r\k\0\u\e\f\r\5\s\j\x\1\u\2\q\7\c\o\i\0\t\j\l\9\5\c\z\7\l\z\m\7\k\j\5\a\8\x\w\6\u\4\3\f\v\h\d\u\8\q\1\h\a\n\7\0\g\o\v\x\k\c\f\v\v\p\f\5\p\7\g\n\w\8\a\y\k\5\4\u\k\r\4\o\h\8\h\5\v\5\b\4\z\z\w\2\e\d\m\j\3\7\r\5\j\o\y\d\7\3\l\3\f\f\l\5\7\l\p\6\5\3\e\6\d\d\z\0\s\h\2\0\5\k\u\e\e\t\x\i ]] 00:07:18.439 12:53:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:18.439 12:53:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:18.439 [2024-11-29 12:53:49.789346] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:07:18.439 [2024-11-29 12:53:49.789701] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60597 ] 00:07:18.439 [2024-11-29 12:53:49.936141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.698 [2024-11-29 12:53:50.003686] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.698 [2024-11-29 12:53:50.070322] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.698  [2024-11-29T12:53:50.472Z] Copying: 512/512 [B] (average 125 kBps) 00:07:18.957 00:07:18.957 12:53:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ofxavwau6kfq74k5n728kf96yn3do52jum0g0ttucfp3a0lulw7rpjdpkimzbuqaxrnoogzikadwmrhz0ii6ijyk2llbdhrudd5jalfzwim34pdhg99fc9puki17xxk1eq9nwihqioukzimu12173630pwm54k12nzxu62ucogqt95i2ek3tn2dunzfkx8d278z2rszowwfs2hf0pg3uuq6kyz6pb8oglna9pes069napb395tjyhu29mzcmjmct1pqom6oanxubx4h6ekwqdvhx8ywx6kndlnlj1je6lest7plj7y5ejrsukx8pbw4ikz6zj8h7l6xfnqlil1ukngcy24q14ea5nyoih6rtlcci4k9vyehrrk0uefr5sjx1u2q7coi0tjl95cz7lzm7kj5a8xw6u43fvhdu8q1han70govxkcfvvpf5p7gnw8ayk54ukr4oh8h5v5b4zzw2edmj37r5joyd73l3ffl57lp653e6ddz0sh205kueetxi == 
\o\f\x\a\v\w\a\u\6\k\f\q\7\4\k\5\n\7\2\8\k\f\9\6\y\n\3\d\o\5\2\j\u\m\0\g\0\t\t\u\c\f\p\3\a\0\l\u\l\w\7\r\p\j\d\p\k\i\m\z\b\u\q\a\x\r\n\o\o\g\z\i\k\a\d\w\m\r\h\z\0\i\i\6\i\j\y\k\2\l\l\b\d\h\r\u\d\d\5\j\a\l\f\z\w\i\m\3\4\p\d\h\g\9\9\f\c\9\p\u\k\i\1\7\x\x\k\1\e\q\9\n\w\i\h\q\i\o\u\k\z\i\m\u\1\2\1\7\3\6\3\0\p\w\m\5\4\k\1\2\n\z\x\u\6\2\u\c\o\g\q\t\9\5\i\2\e\k\3\t\n\2\d\u\n\z\f\k\x\8\d\2\7\8\z\2\r\s\z\o\w\w\f\s\2\h\f\0\p\g\3\u\u\q\6\k\y\z\6\p\b\8\o\g\l\n\a\9\p\e\s\0\6\9\n\a\p\b\3\9\5\t\j\y\h\u\2\9\m\z\c\m\j\m\c\t\1\p\q\o\m\6\o\a\n\x\u\b\x\4\h\6\e\k\w\q\d\v\h\x\8\y\w\x\6\k\n\d\l\n\l\j\1\j\e\6\l\e\s\t\7\p\l\j\7\y\5\e\j\r\s\u\k\x\8\p\b\w\4\i\k\z\6\z\j\8\h\7\l\6\x\f\n\q\l\i\l\1\u\k\n\g\c\y\2\4\q\1\4\e\a\5\n\y\o\i\h\6\r\t\l\c\c\i\4\k\9\v\y\e\h\r\r\k\0\u\e\f\r\5\s\j\x\1\u\2\q\7\c\o\i\0\t\j\l\9\5\c\z\7\l\z\m\7\k\j\5\a\8\x\w\6\u\4\3\f\v\h\d\u\8\q\1\h\a\n\7\0\g\o\v\x\k\c\f\v\v\p\f\5\p\7\g\n\w\8\a\y\k\5\4\u\k\r\4\o\h\8\h\5\v\5\b\4\z\z\w\2\e\d\m\j\3\7\r\5\j\o\y\d\7\3\l\3\f\f\l\5\7\l\p\6\5\3\e\6\d\d\z\0\s\h\2\0\5\k\u\e\e\t\x\i ]] 00:07:18.957 12:53:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:18.957 12:53:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:18.957 [2024-11-29 12:53:50.410944] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:07:18.957 [2024-11-29 12:53:50.411042] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60606 ] 00:07:19.216 [2024-11-29 12:53:50.563538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.216 [2024-11-29 12:53:50.634132] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.216 [2024-11-29 12:53:50.693144] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.476  [2024-11-29T12:53:50.991Z] Copying: 512/512 [B] (average 250 kBps) 00:07:19.476 00:07:19.476 12:53:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ofxavwau6kfq74k5n728kf96yn3do52jum0g0ttucfp3a0lulw7rpjdpkimzbuqaxrnoogzikadwmrhz0ii6ijyk2llbdhrudd5jalfzwim34pdhg99fc9puki17xxk1eq9nwihqioukzimu12173630pwm54k12nzxu62ucogqt95i2ek3tn2dunzfkx8d278z2rszowwfs2hf0pg3uuq6kyz6pb8oglna9pes069napb395tjyhu29mzcmjmct1pqom6oanxubx4h6ekwqdvhx8ywx6kndlnlj1je6lest7plj7y5ejrsukx8pbw4ikz6zj8h7l6xfnqlil1ukngcy24q14ea5nyoih6rtlcci4k9vyehrrk0uefr5sjx1u2q7coi0tjl95cz7lzm7kj5a8xw6u43fvhdu8q1han70govxkcfvvpf5p7gnw8ayk54ukr4oh8h5v5b4zzw2edmj37r5joyd73l3ffl57lp653e6ddz0sh205kueetxi == 
\o\f\x\a\v\w\a\u\6\k\f\q\7\4\k\5\n\7\2\8\k\f\9\6\y\n\3\d\o\5\2\j\u\m\0\g\0\t\t\u\c\f\p\3\a\0\l\u\l\w\7\r\p\j\d\p\k\i\m\z\b\u\q\a\x\r\n\o\o\g\z\i\k\a\d\w\m\r\h\z\0\i\i\6\i\j\y\k\2\l\l\b\d\h\r\u\d\d\5\j\a\l\f\z\w\i\m\3\4\p\d\h\g\9\9\f\c\9\p\u\k\i\1\7\x\x\k\1\e\q\9\n\w\i\h\q\i\o\u\k\z\i\m\u\1\2\1\7\3\6\3\0\p\w\m\5\4\k\1\2\n\z\x\u\6\2\u\c\o\g\q\t\9\5\i\2\e\k\3\t\n\2\d\u\n\z\f\k\x\8\d\2\7\8\z\2\r\s\z\o\w\w\f\s\2\h\f\0\p\g\3\u\u\q\6\k\y\z\6\p\b\8\o\g\l\n\a\9\p\e\s\0\6\9\n\a\p\b\3\9\5\t\j\y\h\u\2\9\m\z\c\m\j\m\c\t\1\p\q\o\m\6\o\a\n\x\u\b\x\4\h\6\e\k\w\q\d\v\h\x\8\y\w\x\6\k\n\d\l\n\l\j\1\j\e\6\l\e\s\t\7\p\l\j\7\y\5\e\j\r\s\u\k\x\8\p\b\w\4\i\k\z\6\z\j\8\h\7\l\6\x\f\n\q\l\i\l\1\u\k\n\g\c\y\2\4\q\1\4\e\a\5\n\y\o\i\h\6\r\t\l\c\c\i\4\k\9\v\y\e\h\r\r\k\0\u\e\f\r\5\s\j\x\1\u\2\q\7\c\o\i\0\t\j\l\9\5\c\z\7\l\z\m\7\k\j\5\a\8\x\w\6\u\4\3\f\v\h\d\u\8\q\1\h\a\n\7\0\g\o\v\x\k\c\f\v\v\p\f\5\p\7\g\n\w\8\a\y\k\5\4\u\k\r\4\o\h\8\h\5\v\5\b\4\z\z\w\2\e\d\m\j\3\7\r\5\j\o\y\d\7\3\l\3\f\f\l\5\7\l\p\6\5\3\e\6\d\d\z\0\s\h\2\0\5\k\u\e\e\t\x\i ]] 00:07:19.476 12:53:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:19.476 12:53:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:19.476 12:53:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:19.476 12:53:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:19.476 12:53:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:19.476 12:53:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:19.734 [2024-11-29 12:53:51.021534] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:07:19.734 [2024-11-29 12:53:51.021641] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60613 ] 00:07:19.734 [2024-11-29 12:53:51.163366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.734 [2024-11-29 12:53:51.226536] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.992 [2024-11-29 12:53:51.286483] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.992  [2024-11-29T12:53:51.767Z] Copying: 512/512 [B] (average 500 kBps) 00:07:20.252 00:07:20.252 12:53:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ekthvhzqcrp8ycon3phvhkw1hiyodvt334vuzkuyx4dr5et76ni5zo0s7trknb122ejo4fqveiszms581g6gygbp3hn3kz0qezlbokv2wna0jn1une3fzkoru2lqevkimv1klaxyndkpitf8wgsu66nyendl788j6elfb3rsplu2wlm8cxmnp2m0cpmwof2l1grxvivqlf8txb7dhy4fo7s8dnc84a9iyop0szt3zgccyrfbzsr8y7wqnoipnu3xaqqnsd0dhkjla0tcjkciwqbot9yl4ppqlr6r7cy5w2ez0rkqh3i3rrkc683u3fn0x1fh37uus0bgyec5x26ctnrv4sufz581vcba3t1uv9222hvwl1lylgruw9x9byc2fndt5tg7qrk5yxelpxm2sh3tiqu353i0k7jvvs95w1gmiampdzuniu2gtp5xxqjappdeamjh2r4b9mv1nnj2rp4wwoemv3gpz0nnyd636uover1qlm2y728jdtgnh2db == \e\k\t\h\v\h\z\q\c\r\p\8\y\c\o\n\3\p\h\v\h\k\w\1\h\i\y\o\d\v\t\3\3\4\v\u\z\k\u\y\x\4\d\r\5\e\t\7\6\n\i\5\z\o\0\s\7\t\r\k\n\b\1\2\2\e\j\o\4\f\q\v\e\i\s\z\m\s\5\8\1\g\6\g\y\g\b\p\3\h\n\3\k\z\0\q\e\z\l\b\o\k\v\2\w\n\a\0\j\n\1\u\n\e\3\f\z\k\o\r\u\2\l\q\e\v\k\i\m\v\1\k\l\a\x\y\n\d\k\p\i\t\f\8\w\g\s\u\6\6\n\y\e\n\d\l\7\8\8\j\6\e\l\f\b\3\r\s\p\l\u\2\w\l\m\8\c\x\m\n\p\2\m\0\c\p\m\w\o\f\2\l\1\g\r\x\v\i\v\q\l\f\8\t\x\b\7\d\h\y\4\f\o\7\s\8\d\n\c\8\4\a\9\i\y\o\p\0\s\z\t\3\z\g\c\c\y\r\f\b\z\s\r\8\y\7\w\q\n\o\i\p\n\u\3\x\a\q\q\n\s\d\0\d\h\k\j\l\a\0\t\c\j\k\c\i\w\q\b\o\t\9\y\l\4\p\p\q\l\r\6\r\7\c\y\5\w\2\e\z\0\r\k\q\h\3\i\3\r\r\k\c\6\8\3\u\3\f\n\0\x\1\f\h\3\7\u\u\s\0\b\g\y\e\c\5\x\2\6\c\t\n\r\v\4\s\u\f\z\5\8\1\v\c\b\a\3\t\1\u\v\9\2\2\2\h\v\w\l\1\l\y\l\g\r\u\w\9\x\9\b\y\c\2\f\n\d\t\5\t\g\7\q\r\k\5\y\x\e\l\p\x\m\2\s\h\3\t\i\q\u\3\5\3\i\0\k\7\j\v\v\s\9\5\w\1\g\m\i\a\m\p\d\z\u\n\i\u\2\g\t\p\5\x\x\q\j\a\p\p\d\e\a\m\j\h\2\r\4\b\9\m\v\1\n\n\j\2\r\p\4\w\w\o\e\m\v\3\g\p\z\0\n\n\y\d\6\3\6\u\o\v\e\r\1\q\l\m\2\y\7\2\8\j\d\t\g\n\h\2\d\b ]] 00:07:20.252 12:53:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:20.252 12:53:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:20.252 [2024-11-29 12:53:51.583014] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:07:20.252 [2024-11-29 12:53:51.583300] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60623 ] 00:07:20.252 [2024-11-29 12:53:51.726704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.511 [2024-11-29 12:53:51.788366] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.511 [2024-11-29 12:53:51.847920] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:20.511  [2024-11-29T12:53:52.285Z] Copying: 512/512 [B] (average 500 kBps) 00:07:20.770 00:07:20.770 12:53:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ekthvhzqcrp8ycon3phvhkw1hiyodvt334vuzkuyx4dr5et76ni5zo0s7trknb122ejo4fqveiszms581g6gygbp3hn3kz0qezlbokv2wna0jn1une3fzkoru2lqevkimv1klaxyndkpitf8wgsu66nyendl788j6elfb3rsplu2wlm8cxmnp2m0cpmwof2l1grxvivqlf8txb7dhy4fo7s8dnc84a9iyop0szt3zgccyrfbzsr8y7wqnoipnu3xaqqnsd0dhkjla0tcjkciwqbot9yl4ppqlr6r7cy5w2ez0rkqh3i3rrkc683u3fn0x1fh37uus0bgyec5x26ctnrv4sufz581vcba3t1uv9222hvwl1lylgruw9x9byc2fndt5tg7qrk5yxelpxm2sh3tiqu353i0k7jvvs95w1gmiampdzuniu2gtp5xxqjappdeamjh2r4b9mv1nnj2rp4wwoemv3gpz0nnyd636uover1qlm2y728jdtgnh2db == \e\k\t\h\v\h\z\q\c\r\p\8\y\c\o\n\3\p\h\v\h\k\w\1\h\i\y\o\d\v\t\3\3\4\v\u\z\k\u\y\x\4\d\r\5\e\t\7\6\n\i\5\z\o\0\s\7\t\r\k\n\b\1\2\2\e\j\o\4\f\q\v\e\i\s\z\m\s\5\8\1\g\6\g\y\g\b\p\3\h\n\3\k\z\0\q\e\z\l\b\o\k\v\2\w\n\a\0\j\n\1\u\n\e\3\f\z\k\o\r\u\2\l\q\e\v\k\i\m\v\1\k\l\a\x\y\n\d\k\p\i\t\f\8\w\g\s\u\6\6\n\y\e\n\d\l\7\8\8\j\6\e\l\f\b\3\r\s\p\l\u\2\w\l\m\8\c\x\m\n\p\2\m\0\c\p\m\w\o\f\2\l\1\g\r\x\v\i\v\q\l\f\8\t\x\b\7\d\h\y\4\f\o\7\s\8\d\n\c\8\4\a\9\i\y\o\p\0\s\z\t\3\z\g\c\c\y\r\f\b\z\s\r\8\y\7\w\q\n\o\i\p\n\u\3\x\a\q\q\n\s\d\0\d\h\k\j\l\a\0\t\c\j\k\c\i\w\q\b\o\t\9\y\l\4\p\p\q\l\r\6\r\7\c\y\5\w\2\e\z\0\r\k\q\h\3\i\3\r\r\k\c\6\8\3\u\3\f\n\0\x\1\f\h\3\7\u\u\s\0\b\g\y\e\c\5\x\2\6\c\t\n\r\v\4\s\u\f\z\5\8\1\v\c\b\a\3\t\1\u\v\9\2\2\2\h\v\w\l\1\l\y\l\g\r\u\w\9\x\9\b\y\c\2\f\n\d\t\5\t\g\7\q\r\k\5\y\x\e\l\p\x\m\2\s\h\3\t\i\q\u\3\5\3\i\0\k\7\j\v\v\s\9\5\w\1\g\m\i\a\m\p\d\z\u\n\i\u\2\g\t\p\5\x\x\q\j\a\p\p\d\e\a\m\j\h\2\r\4\b\9\m\v\1\n\n\j\2\r\p\4\w\w\o\e\m\v\3\g\p\z\0\n\n\y\d\6\3\6\u\o\v\e\r\1\q\l\m\2\y\7\2\8\j\d\t\g\n\h\2\d\b ]] 00:07:20.770 12:53:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:20.770 12:53:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:20.770 [2024-11-29 12:53:52.143771] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:07:20.770 [2024-11-29 12:53:52.144086] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60630 ] 00:07:21.029 [2024-11-29 12:53:52.284962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.029 [2024-11-29 12:53:52.340390] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.029 [2024-11-29 12:53:52.398118] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.029  [2024-11-29T12:53:52.803Z] Copying: 512/512 [B] (average 166 kBps) 00:07:21.288 00:07:21.288 12:53:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ekthvhzqcrp8ycon3phvhkw1hiyodvt334vuzkuyx4dr5et76ni5zo0s7trknb122ejo4fqveiszms581g6gygbp3hn3kz0qezlbokv2wna0jn1une3fzkoru2lqevkimv1klaxyndkpitf8wgsu66nyendl788j6elfb3rsplu2wlm8cxmnp2m0cpmwof2l1grxvivqlf8txb7dhy4fo7s8dnc84a9iyop0szt3zgccyrfbzsr8y7wqnoipnu3xaqqnsd0dhkjla0tcjkciwqbot9yl4ppqlr6r7cy5w2ez0rkqh3i3rrkc683u3fn0x1fh37uus0bgyec5x26ctnrv4sufz581vcba3t1uv9222hvwl1lylgruw9x9byc2fndt5tg7qrk5yxelpxm2sh3tiqu353i0k7jvvs95w1gmiampdzuniu2gtp5xxqjappdeamjh2r4b9mv1nnj2rp4wwoemv3gpz0nnyd636uover1qlm2y728jdtgnh2db == \e\k\t\h\v\h\z\q\c\r\p\8\y\c\o\n\3\p\h\v\h\k\w\1\h\i\y\o\d\v\t\3\3\4\v\u\z\k\u\y\x\4\d\r\5\e\t\7\6\n\i\5\z\o\0\s\7\t\r\k\n\b\1\2\2\e\j\o\4\f\q\v\e\i\s\z\m\s\5\8\1\g\6\g\y\g\b\p\3\h\n\3\k\z\0\q\e\z\l\b\o\k\v\2\w\n\a\0\j\n\1\u\n\e\3\f\z\k\o\r\u\2\l\q\e\v\k\i\m\v\1\k\l\a\x\y\n\d\k\p\i\t\f\8\w\g\s\u\6\6\n\y\e\n\d\l\7\8\8\j\6\e\l\f\b\3\r\s\p\l\u\2\w\l\m\8\c\x\m\n\p\2\m\0\c\p\m\w\o\f\2\l\1\g\r\x\v\i\v\q\l\f\8\t\x\b\7\d\h\y\4\f\o\7\s\8\d\n\c\8\4\a\9\i\y\o\p\0\s\z\t\3\z\g\c\c\y\r\f\b\z\s\r\8\y\7\w\q\n\o\i\p\n\u\3\x\a\q\q\n\s\d\0\d\h\k\j\l\a\0\t\c\j\k\c\i\w\q\b\o\t\9\y\l\4\p\p\q\l\r\6\r\7\c\y\5\w\2\e\z\0\r\k\q\h\3\i\3\r\r\k\c\6\8\3\u\3\f\n\0\x\1\f\h\3\7\u\u\s\0\b\g\y\e\c\5\x\2\6\c\t\n\r\v\4\s\u\f\z\5\8\1\v\c\b\a\3\t\1\u\v\9\2\2\2\h\v\w\l\1\l\y\l\g\r\u\w\9\x\9\b\y\c\2\f\n\d\t\5\t\g\7\q\r\k\5\y\x\e\l\p\x\m\2\s\h\3\t\i\q\u\3\5\3\i\0\k\7\j\v\v\s\9\5\w\1\g\m\i\a\m\p\d\z\u\n\i\u\2\g\t\p\5\x\x\q\j\a\p\p\d\e\a\m\j\h\2\r\4\b\9\m\v\1\n\n\j\2\r\p\4\w\w\o\e\m\v\3\g\p\z\0\n\n\y\d\6\3\6\u\o\v\e\r\1\q\l\m\2\y\7\2\8\j\d\t\g\n\h\2\d\b ]] 00:07:21.288 12:53:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:21.288 12:53:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:21.288 [2024-11-29 12:53:52.709217] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
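The spdk_dd runs in this test sweep every --iflag/--oflag pairing from two small flag lists (direct and nonblock on the read side; those two plus sync and dsync on the write side), and after each copy they check that dd.dump1 still matches the generated 512-byte input. A minimal bash sketch of that loop, with simplified stand-ins for the dump paths and the gen_bytes helper rather than the exact posix.sh code:

flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)
in=/tmp/dd.dump0     # hypothetical paths, not the test's real dump files
out=/tmp/dd.dump1
for flag_ro in "${flags_ro[@]}"; do
    # stand-in for gen_bytes 512: 512 printable random bytes
    head -c 512 /dev/urandom | base64 -w0 | head -c 512 > "$in"
    for flag_rw in "${flags_rw[@]}"; do
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio \
            --if="$in"  --iflag="$flag_ro" \
            --of="$out" --oflag="$flag_rw"
        cmp -s "$in" "$out" || echo "mismatch for iflag=$flag_ro oflag=$flag_rw"
    done
done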
00:07:21.288 [2024-11-29 12:53:52.709307] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60638 ] 00:07:21.547 [2024-11-29 12:53:52.845598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.547 [2024-11-29 12:53:52.902382] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.547 [2024-11-29 12:53:52.963672] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.547  [2024-11-29T12:53:53.321Z] Copying: 512/512 [B] (average 500 kBps) 00:07:21.806 00:07:21.806 12:53:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ekthvhzqcrp8ycon3phvhkw1hiyodvt334vuzkuyx4dr5et76ni5zo0s7trknb122ejo4fqveiszms581g6gygbp3hn3kz0qezlbokv2wna0jn1une3fzkoru2lqevkimv1klaxyndkpitf8wgsu66nyendl788j6elfb3rsplu2wlm8cxmnp2m0cpmwof2l1grxvivqlf8txb7dhy4fo7s8dnc84a9iyop0szt3zgccyrfbzsr8y7wqnoipnu3xaqqnsd0dhkjla0tcjkciwqbot9yl4ppqlr6r7cy5w2ez0rkqh3i3rrkc683u3fn0x1fh37uus0bgyec5x26ctnrv4sufz581vcba3t1uv9222hvwl1lylgruw9x9byc2fndt5tg7qrk5yxelpxm2sh3tiqu353i0k7jvvs95w1gmiampdzuniu2gtp5xxqjappdeamjh2r4b9mv1nnj2rp4wwoemv3gpz0nnyd636uover1qlm2y728jdtgnh2db == \e\k\t\h\v\h\z\q\c\r\p\8\y\c\o\n\3\p\h\v\h\k\w\1\h\i\y\o\d\v\t\3\3\4\v\u\z\k\u\y\x\4\d\r\5\e\t\7\6\n\i\5\z\o\0\s\7\t\r\k\n\b\1\2\2\e\j\o\4\f\q\v\e\i\s\z\m\s\5\8\1\g\6\g\y\g\b\p\3\h\n\3\k\z\0\q\e\z\l\b\o\k\v\2\w\n\a\0\j\n\1\u\n\e\3\f\z\k\o\r\u\2\l\q\e\v\k\i\m\v\1\k\l\a\x\y\n\d\k\p\i\t\f\8\w\g\s\u\6\6\n\y\e\n\d\l\7\8\8\j\6\e\l\f\b\3\r\s\p\l\u\2\w\l\m\8\c\x\m\n\p\2\m\0\c\p\m\w\o\f\2\l\1\g\r\x\v\i\v\q\l\f\8\t\x\b\7\d\h\y\4\f\o\7\s\8\d\n\c\8\4\a\9\i\y\o\p\0\s\z\t\3\z\g\c\c\y\r\f\b\z\s\r\8\y\7\w\q\n\o\i\p\n\u\3\x\a\q\q\n\s\d\0\d\h\k\j\l\a\0\t\c\j\k\c\i\w\q\b\o\t\9\y\l\4\p\p\q\l\r\6\r\7\c\y\5\w\2\e\z\0\r\k\q\h\3\i\3\r\r\k\c\6\8\3\u\3\f\n\0\x\1\f\h\3\7\u\u\s\0\b\g\y\e\c\5\x\2\6\c\t\n\r\v\4\s\u\f\z\5\8\1\v\c\b\a\3\t\1\u\v\9\2\2\2\h\v\w\l\1\l\y\l\g\r\u\w\9\x\9\b\y\c\2\f\n\d\t\5\t\g\7\q\r\k\5\y\x\e\l\p\x\m\2\s\h\3\t\i\q\u\3\5\3\i\0\k\7\j\v\v\s\9\5\w\1\g\m\i\a\m\p\d\z\u\n\i\u\2\g\t\p\5\x\x\q\j\a\p\p\d\e\a\m\j\h\2\r\4\b\9\m\v\1\n\n\j\2\r\p\4\w\w\o\e\m\v\3\g\p\z\0\n\n\y\d\6\3\6\u\o\v\e\r\1\q\l\m\2\y\7\2\8\j\d\t\g\n\h\2\d\b ]] 00:07:21.806 00:07:21.806 real 0m4.646s 00:07:21.806 user 0m2.483s 00:07:21.806 sys 0m1.181s 00:07:21.806 ************************************ 00:07:21.806 END TEST dd_flags_misc_forced_aio 00:07:21.806 ************************************ 00:07:21.806 12:53:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.806 12:53:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:21.806 12:53:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:07:21.806 12:53:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:21.806 12:53:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:21.806 ************************************ 00:07:21.806 END TEST spdk_dd_posix 00:07:21.806 ************************************ 00:07:21.806 00:07:21.806 real 0m22.099s 00:07:21.806 user 0m10.852s 00:07:21.806 sys 0m7.609s 00:07:21.806 12:53:53 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.806 12:53:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:21.806 12:53:53 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:21.806 12:53:53 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:21.806 12:53:53 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.806 12:53:53 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:22.065 ************************************ 00:07:22.065 START TEST spdk_dd_malloc 00:07:22.065 ************************************ 00:07:22.065 12:53:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:22.065 * Looking for test storage... 00:07:22.065 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:22.065 12:53:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:22.065 12:53:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:22.065 12:53:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:22.065 12:53:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:22.065 12:53:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:22.065 12:53:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:22.065 12:53:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:22.065 12:53:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.065 12:53:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:07:22.065 12:53:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:07:22.065 12:53:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:07:22.065 12:53:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:07:22.065 12:53:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:07:22.065 12:53:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:07:22.065 12:53:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:22.065 12:53:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:07:22.065 12:53:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:07:22.065 12:53:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:22.065 12:53:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:22.065 12:53:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:07:22.065 12:53:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:07:22.065 12:53:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.065 12:53:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:07:22.065 12:53:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:22.065 12:53:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:07:22.065 12:53:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:07:22.065 12:53:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.065 12:53:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:07:22.065 12:53:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:22.065 12:53:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:22.065 12:53:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:22.066 12:53:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:07:22.066 12:53:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.066 12:53:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:22.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.066 --rc genhtml_branch_coverage=1 00:07:22.066 --rc genhtml_function_coverage=1 00:07:22.066 --rc genhtml_legend=1 00:07:22.066 --rc geninfo_all_blocks=1 00:07:22.066 --rc geninfo_unexecuted_blocks=1 00:07:22.066 00:07:22.066 ' 00:07:22.066 12:53:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:22.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.066 --rc genhtml_branch_coverage=1 00:07:22.066 --rc genhtml_function_coverage=1 00:07:22.066 --rc genhtml_legend=1 00:07:22.066 --rc geninfo_all_blocks=1 00:07:22.066 --rc geninfo_unexecuted_blocks=1 00:07:22.066 00:07:22.066 ' 00:07:22.066 12:53:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:22.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.066 --rc genhtml_branch_coverage=1 00:07:22.066 --rc genhtml_function_coverage=1 00:07:22.066 --rc genhtml_legend=1 00:07:22.066 --rc geninfo_all_blocks=1 00:07:22.066 --rc geninfo_unexecuted_blocks=1 00:07:22.066 00:07:22.066 ' 00:07:22.066 12:53:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:22.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.066 --rc genhtml_branch_coverage=1 00:07:22.066 --rc genhtml_function_coverage=1 00:07:22.066 --rc genhtml_legend=1 00:07:22.066 --rc geninfo_all_blocks=1 00:07:22.066 --rc geninfo_unexecuted_blocks=1 00:07:22.066 00:07:22.066 ' 00:07:22.066 12:53:53 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:22.066 12:53:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:07:22.066 12:53:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.066 12:53:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.066 12:53:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.066 12:53:53 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.066 12:53:53 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.066 12:53:53 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.066 12:53:53 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:07:22.066 12:53:53 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.066 12:53:53 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:22.066 12:53:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:22.066 12:53:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.066 12:53:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:22.066 ************************************ 00:07:22.066 START TEST dd_malloc_copy 00:07:22.066 ************************************ 00:07:22.066 12:53:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:07:22.066 12:53:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:22.066 12:53:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:22.066 12:53:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:07:22.066 12:53:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:22.066 12:53:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:22.066 12:53:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:22.066 12:53:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:22.066 12:53:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:07:22.066 12:53:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:22.066 12:53:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:22.326 [2024-11-29 12:53:53.600427] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:07:22.326 [2024-11-29 12:53:53.600739] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60720 ] 00:07:22.326 { 00:07:22.326 "subsystems": [ 00:07:22.326 { 00:07:22.326 "subsystem": "bdev", 00:07:22.326 "config": [ 00:07:22.326 { 00:07:22.326 "params": { 00:07:22.326 "block_size": 512, 00:07:22.326 "num_blocks": 1048576, 00:07:22.326 "name": "malloc0" 00:07:22.326 }, 00:07:22.326 "method": "bdev_malloc_create" 00:07:22.326 }, 00:07:22.326 { 00:07:22.326 "params": { 00:07:22.326 "block_size": 512, 00:07:22.326 "num_blocks": 1048576, 00:07:22.326 "name": "malloc1" 00:07:22.326 }, 00:07:22.326 "method": "bdev_malloc_create" 00:07:22.326 }, 00:07:22.326 { 00:07:22.326 "method": "bdev_wait_for_examine" 00:07:22.326 } 00:07:22.326 ] 00:07:22.326 } 00:07:22.326 ] 00:07:22.326 } 00:07:22.326 [2024-11-29 12:53:53.749965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.326 [2024-11-29 12:53:53.814216] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.585 [2024-11-29 12:53:53.874281] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:23.962  [2024-11-29T12:53:56.413Z] Copying: 191/512 [MB] (191 MBps) [2024-11-29T12:53:56.982Z] Copying: 396/512 [MB] (204 MBps) [2024-11-29T12:53:57.550Z] Copying: 512/512 [MB] (average 199 MBps) 00:07:26.035 00:07:26.035 12:53:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:07:26.035 12:53:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:26.035 12:53:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:26.035 12:53:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:26.035 [2024-11-29 12:53:57.460779] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:07:26.035 [2024-11-29 12:53:57.461775] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60773 ] 00:07:26.035 { 00:07:26.035 "subsystems": [ 00:07:26.035 { 00:07:26.035 "subsystem": "bdev", 00:07:26.035 "config": [ 00:07:26.035 { 00:07:26.035 "params": { 00:07:26.035 "block_size": 512, 00:07:26.035 "num_blocks": 1048576, 00:07:26.035 "name": "malloc0" 00:07:26.035 }, 00:07:26.035 "method": "bdev_malloc_create" 00:07:26.035 }, 00:07:26.035 { 00:07:26.035 "params": { 00:07:26.035 "block_size": 512, 00:07:26.035 "num_blocks": 1048576, 00:07:26.035 "name": "malloc1" 00:07:26.035 }, 00:07:26.035 "method": "bdev_malloc_create" 00:07:26.035 }, 00:07:26.035 { 00:07:26.035 "method": "bdev_wait_for_examine" 00:07:26.035 } 00:07:26.035 ] 00:07:26.035 } 00:07:26.035 ] 00:07:26.035 } 00:07:26.294 [2024-11-29 12:53:57.606843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.294 [2024-11-29 12:53:57.660530] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.294 [2024-11-29 12:53:57.715953] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:27.669  [2024-11-29T12:54:00.119Z] Copying: 208/512 [MB] (208 MBps) [2024-11-29T12:54:00.688Z] Copying: 424/512 [MB] (216 MBps) [2024-11-29T12:54:01.254Z] Copying: 512/512 [MB] (average 211 MBps) 00:07:29.739 00:07:29.739 ************************************ 00:07:29.739 END TEST dd_malloc_copy 00:07:29.739 00:07:29.739 real 0m7.535s 00:07:29.739 user 0m6.498s 00:07:29.739 sys 0m0.874s 00:07:29.739 12:54:01 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.739 12:54:01 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:29.739 ************************************ 00:07:29.739 ************************************ 00:07:29.739 END TEST spdk_dd_malloc 00:07:29.739 ************************************ 00:07:29.739 00:07:29.739 real 0m7.794s 00:07:29.739 user 0m6.637s 00:07:29.739 sys 0m0.987s 00:07:29.739 12:54:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.739 12:54:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:29.739 12:54:01 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:29.739 12:54:01 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:29.739 12:54:01 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.739 12:54:01 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:29.739 ************************************ 00:07:29.739 START TEST spdk_dd_bdev_to_bdev 00:07:29.739 ************************************ 00:07:29.739 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:29.739 * Looking for test storage... 
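The dd_malloc_copy run above is driven entirely by a generated JSON config: two 512 MiB malloc bdevs (1048576 blocks of 512 bytes each) plus a bdev_wait_for_examine step, fed to spdk_dd over a file descriptor. A standalone sketch of the same pair of copies, assuming the config is written to a temporary file instead of /dev/fd/62:

cat > /tmp/malloc_copy.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 },
          "method": "bdev_malloc_create" },
        { "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 },
          "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# copy malloc0 -> malloc1, then back the other way, as the two passes above do
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /tmp/malloc_copy.json
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /tmp/malloc_copy.json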
00:07:29.998 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lcov --version 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:29.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.998 --rc genhtml_branch_coverage=1 00:07:29.998 --rc genhtml_function_coverage=1 00:07:29.998 --rc genhtml_legend=1 00:07:29.998 --rc geninfo_all_blocks=1 00:07:29.998 --rc geninfo_unexecuted_blocks=1 00:07:29.998 00:07:29.998 ' 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:29.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.998 --rc genhtml_branch_coverage=1 00:07:29.998 --rc genhtml_function_coverage=1 00:07:29.998 --rc genhtml_legend=1 00:07:29.998 --rc geninfo_all_blocks=1 00:07:29.998 --rc geninfo_unexecuted_blocks=1 00:07:29.998 00:07:29.998 ' 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:29.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.998 --rc genhtml_branch_coverage=1 00:07:29.998 --rc genhtml_function_coverage=1 00:07:29.998 --rc genhtml_legend=1 00:07:29.998 --rc geninfo_all_blocks=1 00:07:29.998 --rc geninfo_unexecuted_blocks=1 00:07:29.998 00:07:29.998 ' 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:29.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.998 --rc genhtml_branch_coverage=1 00:07:29.998 --rc genhtml_function_coverage=1 00:07:29.998 --rc genhtml_legend=1 00:07:29.998 --rc geninfo_all_blocks=1 00:07:29.998 --rc geninfo_unexecuted_blocks=1 00:07:29.998 00:07:29.998 ' 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.998 12:54:01 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:29.998 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:29.999 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:07:29.999 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:29.999 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:29.999 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:29.999 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:29.999 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:29.999 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:29.999 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:29.999 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.999 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:29.999 ************************************ 00:07:29.999 START TEST dd_inflate_file 00:07:29.999 ************************************ 00:07:29.999 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:29.999 [2024-11-29 12:54:01.425373] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:07:29.999 [2024-11-29 12:54:01.425639] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60884 ] 00:07:30.259 [2024-11-29 12:54:01.566359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.259 [2024-11-29 12:54:01.616374] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.259 [2024-11-29 12:54:01.675059] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:30.259  [2024-11-29T12:54:02.052Z] Copying: 64/64 [MB] (average 1280 MBps) 00:07:30.537 00:07:30.537 00:07:30.537 real 0m0.579s 00:07:30.537 user 0m0.327s 00:07:30.537 sys 0m0.331s 00:07:30.537 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.537 12:54:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:07:30.537 ************************************ 00:07:30.537 END TEST dd_inflate_file 00:07:30.537 ************************************ 00:07:30.537 12:54:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:30.537 12:54:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:30.537 12:54:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:30.537 12:54:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:30.537 12:54:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:30.537 12:54:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:30.537 12:54:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:30.537 12:54:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.537 12:54:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:30.537 ************************************ 00:07:30.537 START TEST dd_copy_to_out_bdev 00:07:30.537 ************************************ 00:07:30.537 12:54:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:30.808 { 00:07:30.808 "subsystems": [ 00:07:30.808 { 00:07:30.808 "subsystem": "bdev", 00:07:30.808 "config": [ 00:07:30.808 { 00:07:30.808 "params": { 00:07:30.808 "trtype": "pcie", 00:07:30.808 "traddr": "0000:00:10.0", 00:07:30.808 "name": "Nvme0" 00:07:30.808 }, 00:07:30.808 "method": "bdev_nvme_attach_controller" 00:07:30.808 }, 00:07:30.808 { 00:07:30.808 "params": { 00:07:30.808 "trtype": "pcie", 00:07:30.808 "traddr": "0000:00:11.0", 00:07:30.808 "name": "Nvme1" 00:07:30.808 }, 00:07:30.808 "method": "bdev_nvme_attach_controller" 00:07:30.808 }, 00:07:30.808 { 00:07:30.808 "method": "bdev_wait_for_examine" 00:07:30.808 } 00:07:30.808 ] 00:07:30.808 } 00:07:30.808 ] 00:07:30.808 } 00:07:30.808 [2024-11-29 12:54:02.081341] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:07:30.808 [2024-11-29 12:54:02.081448] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60919 ] 00:07:30.808 [2024-11-29 12:54:02.226816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.808 [2024-11-29 12:54:02.284541] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.067 [2024-11-29 12:54:02.338966] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.005  [2024-11-29T12:54:03.778Z] Copying: 54/64 [MB] (54 MBps) [2024-11-29T12:54:04.036Z] Copying: 64/64 [MB] (average 54 MBps) 00:07:32.521 00:07:32.521 00:07:32.521 real 0m1.903s 00:07:32.521 user 0m1.671s 00:07:32.521 sys 0m1.538s 00:07:32.521 12:54:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.521 ************************************ 00:07:32.521 END TEST dd_copy_to_out_bdev 00:07:32.521 ************************************ 00:07:32.521 12:54:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:32.521 12:54:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:32.521 12:54:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:32.521 12:54:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:32.521 12:54:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.521 12:54:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:32.521 ************************************ 00:07:32.521 START TEST dd_offset_magic 00:07:32.521 ************************************ 00:07:32.521 12:54:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:07:32.521 12:54:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:32.521 12:54:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:32.521 12:54:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:32.521 12:54:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:32.521 12:54:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:32.521 12:54:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:32.521 12:54:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:32.521 12:54:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:32.521 [2024-11-29 12:54:04.030558] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
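dd_copy_to_out_bdev streams the inflated dump file into Nvme0n1 using a config that attaches both test controllers over PCIe (0000:00:10.0 and 0000:00:11.0); the same two-controller JSON reappears in the dd_offset_magic passes that follow. A sketch with the config written to a file instead of /dev/fd/62 (the path is a hypothetical stand-in):

conf=/tmp/nvme_pair.json
cat > "$conf" <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "params": { "trtype": "pcie", "traddr": "0000:00:11.0", "name": "Nvme1" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json "$conf"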
00:07:32.521 [2024-11-29 12:54:04.030791] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60965 ] 00:07:32.779 { 00:07:32.779 "subsystems": [ 00:07:32.779 { 00:07:32.779 "subsystem": "bdev", 00:07:32.779 "config": [ 00:07:32.779 { 00:07:32.779 "params": { 00:07:32.779 "trtype": "pcie", 00:07:32.779 "traddr": "0000:00:10.0", 00:07:32.779 "name": "Nvme0" 00:07:32.779 }, 00:07:32.779 "method": "bdev_nvme_attach_controller" 00:07:32.779 }, 00:07:32.779 { 00:07:32.779 "params": { 00:07:32.779 "trtype": "pcie", 00:07:32.779 "traddr": "0000:00:11.0", 00:07:32.779 "name": "Nvme1" 00:07:32.779 }, 00:07:32.779 "method": "bdev_nvme_attach_controller" 00:07:32.779 }, 00:07:32.779 { 00:07:32.779 "method": "bdev_wait_for_examine" 00:07:32.779 } 00:07:32.779 ] 00:07:32.779 } 00:07:32.779 ] 00:07:32.779 } 00:07:32.779 [2024-11-29 12:54:04.172908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.779 [2024-11-29 12:54:04.224971] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.779 [2024-11-29 12:54:04.283100] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.037  [2024-11-29T12:54:04.811Z] Copying: 65/65 [MB] (average 802 MBps) 00:07:33.296 00:07:33.296 12:54:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:33.296 12:54:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:33.296 12:54:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:33.296 12:54:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:33.555 [2024-11-29 12:54:04.834173] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:07:33.555 [2024-11-29 12:54:04.834581] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60985 ] 00:07:33.555 { 00:07:33.555 "subsystems": [ 00:07:33.555 { 00:07:33.555 "subsystem": "bdev", 00:07:33.555 "config": [ 00:07:33.555 { 00:07:33.555 "params": { 00:07:33.555 "trtype": "pcie", 00:07:33.555 "traddr": "0000:00:10.0", 00:07:33.555 "name": "Nvme0" 00:07:33.555 }, 00:07:33.555 "method": "bdev_nvme_attach_controller" 00:07:33.555 }, 00:07:33.555 { 00:07:33.555 "params": { 00:07:33.555 "trtype": "pcie", 00:07:33.555 "traddr": "0000:00:11.0", 00:07:33.555 "name": "Nvme1" 00:07:33.555 }, 00:07:33.555 "method": "bdev_nvme_attach_controller" 00:07:33.555 }, 00:07:33.555 { 00:07:33.555 "method": "bdev_wait_for_examine" 00:07:33.555 } 00:07:33.555 ] 00:07:33.555 } 00:07:33.555 ] 00:07:33.555 } 00:07:33.555 [2024-11-29 12:54:04.984621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.555 [2024-11-29 12:54:05.027869] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.813 [2024-11-29 12:54:05.085229] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.813  [2024-11-29T12:54:05.586Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:34.071 00:07:34.071 12:54:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:34.071 12:54:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:34.071 12:54:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:34.071 12:54:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:34.071 12:54:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:34.071 12:54:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:34.071 12:54:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:34.071 { 00:07:34.071 "subsystems": [ 00:07:34.071 { 00:07:34.071 "subsystem": "bdev", 00:07:34.071 "config": [ 00:07:34.071 { 00:07:34.071 "params": { 00:07:34.071 "trtype": "pcie", 00:07:34.071 "traddr": "0000:00:10.0", 00:07:34.071 "name": "Nvme0" 00:07:34.071 }, 00:07:34.071 "method": "bdev_nvme_attach_controller" 00:07:34.071 }, 00:07:34.071 { 00:07:34.071 "params": { 00:07:34.071 "trtype": "pcie", 00:07:34.071 "traddr": "0000:00:11.0", 00:07:34.071 "name": "Nvme1" 00:07:34.071 }, 00:07:34.072 "method": "bdev_nvme_attach_controller" 00:07:34.072 }, 00:07:34.072 { 00:07:34.072 "method": "bdev_wait_for_examine" 00:07:34.072 } 00:07:34.072 ] 00:07:34.072 } 00:07:34.072 ] 00:07:34.072 } 00:07:34.072 [2024-11-29 12:54:05.521637] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
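Each dd_offset_magic pass writes 65 MiB from Nvme0n1 into Nvme1n1 starting at a 1 MiB-aligned offset, reads a single 1 MiB block back from that same offset into dd.dump1, and checks that its first 26 bytes are still the magic string. A sketch of the offset-16 pass just traced, reusing the $conf file from the previous sketch; the redirection feeding read is implied by the trace:

offset=16
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 \
    --count=65 --seek="$offset" --bs=1048576 --json "$conf"    # write 65 MiB starting at the offset
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 \
    --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
    --count=1 --skip="$offset" --bs=1048576 --json "$conf"     # read the first block back
read -rn26 magic_check < /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
[[ $magic_check == 'This Is Our Magic, find it' ]] || echo "magic not found at offset $offset MiB"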
00:07:34.072 [2024-11-29 12:54:05.521776] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60996 ] 00:07:34.330 [2024-11-29 12:54:05.671181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.330 [2024-11-29 12:54:05.732453] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.330 [2024-11-29 12:54:05.790890] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:34.589  [2024-11-29T12:54:06.363Z] Copying: 65/65 [MB] (average 915 MBps) 00:07:34.848 00:07:34.848 12:54:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:34.848 12:54:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:34.848 12:54:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:34.848 12:54:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:34.848 [2024-11-29 12:54:06.335265] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:07:34.848 [2024-11-29 12:54:06.335362] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61016 ] 00:07:34.848 { 00:07:34.848 "subsystems": [ 00:07:34.848 { 00:07:34.848 "subsystem": "bdev", 00:07:34.848 "config": [ 00:07:34.848 { 00:07:34.848 "params": { 00:07:34.848 "trtype": "pcie", 00:07:34.848 "traddr": "0000:00:10.0", 00:07:34.848 "name": "Nvme0" 00:07:34.848 }, 00:07:34.848 "method": "bdev_nvme_attach_controller" 00:07:34.848 }, 00:07:34.848 { 00:07:34.848 "params": { 00:07:34.848 "trtype": "pcie", 00:07:34.848 "traddr": "0000:00:11.0", 00:07:34.848 "name": "Nvme1" 00:07:34.848 }, 00:07:34.848 "method": "bdev_nvme_attach_controller" 00:07:34.848 }, 00:07:34.848 { 00:07:34.848 "method": "bdev_wait_for_examine" 00:07:34.848 } 00:07:34.848 ] 00:07:34.848 } 00:07:34.848 ] 00:07:34.848 } 00:07:35.106 [2024-11-29 12:54:06.481604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.106 [2024-11-29 12:54:06.526413] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.106 [2024-11-29 12:54:06.583397] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:35.365  [2024-11-29T12:54:07.138Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:35.623 00:07:35.623 12:54:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:35.623 ************************************ 00:07:35.623 END TEST dd_offset_magic 00:07:35.623 ************************************ 00:07:35.623 12:54:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:35.623 00:07:35.623 real 0m2.975s 00:07:35.623 user 0m2.124s 00:07:35.623 sys 0m0.947s 00:07:35.623 12:54:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:07:35.623 12:54:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:35.623 12:54:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:35.623 12:54:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:35.623 12:54:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:35.623 12:54:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:35.623 12:54:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:35.623 12:54:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:35.623 12:54:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:35.623 12:54:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:35.623 12:54:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:35.623 12:54:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:35.624 12:54:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:35.624 [2024-11-29 12:54:07.050841] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:07:35.624 [2024-11-29 12:54:07.050955] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61054 ] 00:07:35.624 { 00:07:35.624 "subsystems": [ 00:07:35.624 { 00:07:35.624 "subsystem": "bdev", 00:07:35.624 "config": [ 00:07:35.624 { 00:07:35.624 "params": { 00:07:35.624 "trtype": "pcie", 00:07:35.624 "traddr": "0000:00:10.0", 00:07:35.624 "name": "Nvme0" 00:07:35.624 }, 00:07:35.624 "method": "bdev_nvme_attach_controller" 00:07:35.624 }, 00:07:35.624 { 00:07:35.624 "params": { 00:07:35.624 "trtype": "pcie", 00:07:35.624 "traddr": "0000:00:11.0", 00:07:35.624 "name": "Nvme1" 00:07:35.624 }, 00:07:35.624 "method": "bdev_nvme_attach_controller" 00:07:35.624 }, 00:07:35.624 { 00:07:35.624 "method": "bdev_wait_for_examine" 00:07:35.624 } 00:07:35.624 ] 00:07:35.624 } 00:07:35.624 ] 00:07:35.624 } 00:07:35.883 [2024-11-29 12:54:07.188476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.883 [2024-11-29 12:54:07.232620] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.883 [2024-11-29 12:54:07.286090] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.143  [2024-11-29T12:54:07.658Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:07:36.143 00:07:36.402 12:54:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:36.402 12:54:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:36.402 12:54:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:36.402 12:54:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:36.402 12:54:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:36.402 12:54:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:36.402 12:54:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:36.402 12:54:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:36.402 12:54:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:36.402 12:54:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:36.402 { 00:07:36.402 "subsystems": [ 00:07:36.402 { 00:07:36.402 "subsystem": "bdev", 00:07:36.402 "config": [ 00:07:36.402 { 00:07:36.402 "params": { 00:07:36.402 "trtype": "pcie", 00:07:36.402 "traddr": "0000:00:10.0", 00:07:36.402 "name": "Nvme0" 00:07:36.402 }, 00:07:36.402 "method": "bdev_nvme_attach_controller" 00:07:36.402 }, 00:07:36.402 { 00:07:36.402 "params": { 00:07:36.402 "trtype": "pcie", 00:07:36.402 "traddr": "0000:00:11.0", 00:07:36.402 "name": "Nvme1" 00:07:36.402 }, 00:07:36.402 "method": "bdev_nvme_attach_controller" 00:07:36.402 }, 00:07:36.402 { 00:07:36.402 "method": "bdev_wait_for_examine" 00:07:36.402 } 00:07:36.402 ] 00:07:36.402 } 00:07:36.402 ] 00:07:36.402 } 00:07:36.402 [2024-11-29 12:54:07.729361] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:07:36.402 [2024-11-29 12:54:07.729490] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61070 ] 00:07:36.402 [2024-11-29 12:54:07.876879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.661 [2024-11-29 12:54:07.926766] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.661 [2024-11-29 12:54:07.981316] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.661  [2024-11-29T12:54:08.435Z] Copying: 5120/5120 [kB] (average 714 MBps) 00:07:36.920 00:07:36.920 12:54:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:36.920 00:07:36.920 real 0m7.194s 00:07:36.920 user 0m5.257s 00:07:36.920 sys 0m3.533s 00:07:36.920 12:54:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.920 ************************************ 00:07:36.920 END TEST spdk_dd_bdev_to_bdev 00:07:36.920 ************************************ 00:07:36.920 12:54:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:36.920 12:54:08 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:36.920 12:54:08 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:36.920 12:54:08 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.920 12:54:08 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.920 12:54:08 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:36.920 ************************************ 00:07:36.920 START TEST spdk_dd_uring 00:07:36.920 ************************************ 00:07:36.920 12:54:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:37.180 * Looking for test storage... 
00:07:37.180 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lcov --version 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:37.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.180 --rc genhtml_branch_coverage=1 00:07:37.180 --rc genhtml_function_coverage=1 00:07:37.180 --rc genhtml_legend=1 00:07:37.180 --rc geninfo_all_blocks=1 00:07:37.180 --rc geninfo_unexecuted_blocks=1 00:07:37.180 00:07:37.180 ' 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:37.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.180 --rc genhtml_branch_coverage=1 00:07:37.180 --rc genhtml_function_coverage=1 00:07:37.180 --rc genhtml_legend=1 00:07:37.180 --rc geninfo_all_blocks=1 00:07:37.180 --rc geninfo_unexecuted_blocks=1 00:07:37.180 00:07:37.180 ' 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:37.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.180 --rc genhtml_branch_coverage=1 00:07:37.180 --rc genhtml_function_coverage=1 00:07:37.180 --rc genhtml_legend=1 00:07:37.180 --rc geninfo_all_blocks=1 00:07:37.180 --rc geninfo_unexecuted_blocks=1 00:07:37.180 00:07:37.180 ' 00:07:37.180 12:54:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:37.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.180 --rc genhtml_branch_coverage=1 00:07:37.180 --rc genhtml_function_coverage=1 00:07:37.180 --rc genhtml_legend=1 00:07:37.180 --rc geninfo_all_blocks=1 00:07:37.181 --rc geninfo_unexecuted_blocks=1 00:07:37.181 00:07:37.181 ' 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:37.181 ************************************ 00:07:37.181 START TEST dd_uring_copy 00:07:37.181 ************************************ 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:37.181 
12:54:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=v77xqo3iiur15m1zxf73uknjuknee185oaa1bhuiw67kagn2b2h0nghjg59m5gq3ia13rg8nzbjri53sd1z208rn1p53okw2myeszl6uyzli9ny108cyc9php3m6wjg3ic7i6l6rl21u2h30fwyuccz0t3dks224pfoau0hbnjx7d223kxdz7uq5dkc7n5ws12t89yrc9e51u8vemjigju2ko757y70pqdrofnk4rbn2nm6580fsvhx32fmro67wq3gcbols4ftu34wca2j2u3677ww5lw07tfux0b5d003dvu3ifxxbgir7jbene1879bb2vrypgxhsen8457i498fzpwowd9zqcljmu8a6zhwtxfwtl2hkq2pmaa59z0qgrh27xyc6hb4a4b07l6t0mqm8pfoj3d8pkbvanp07ltxqkxblxyszg7nr8mmjwtb3xiqk035py4ibcsz5m3vxnml6toxhuzam59pe4w5x1k4ulo5pk61y0gp9zm9h210yv3920qqzcki7txxhnvx1bhuaz80ewnpwxnsjcru3081obgn1y1e87lfgk4s3nua1x7e5q6079vmlfysurma9todfu3io9rhtklqb62cs51httoj3repbpny5f28t8mtjl0qd8fair1xmbcne23r7vrw5x9dbjqjqeswjqheiimqgkl7ojj4n1vsrwffjlcotgdxt543uuq3ne8cljv70b9cdqgyp95mfg81rbe6ec7f4f9ucebtiitlaxv0vo5yyj249aeml9klfoznjqx9o4adtvsmrolt1es6v7i2ugngclmxv25va02yr284wlfszungdnef1dq7fubzzdqcxabuwsfo548b4ikaroh64a4iv1iwcmz0of8n3behmmcj5kgbddi7gn8gfwbtmcgb0u6o3qcodx4q5glaw9myut9vpr0liy7otr9qer5h6abv9dhuwne9hx4e7218j99eomqn1ncttd33puwuvh3njiizuojcj375vxikefutxterb 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
v77xqo3iiur15m1zxf73uknjuknee185oaa1bhuiw67kagn2b2h0nghjg59m5gq3ia13rg8nzbjri53sd1z208rn1p53okw2myeszl6uyzli9ny108cyc9php3m6wjg3ic7i6l6rl21u2h30fwyuccz0t3dks224pfoau0hbnjx7d223kxdz7uq5dkc7n5ws12t89yrc9e51u8vemjigju2ko757y70pqdrofnk4rbn2nm6580fsvhx32fmro67wq3gcbols4ftu34wca2j2u3677ww5lw07tfux0b5d003dvu3ifxxbgir7jbene1879bb2vrypgxhsen8457i498fzpwowd9zqcljmu8a6zhwtxfwtl2hkq2pmaa59z0qgrh27xyc6hb4a4b07l6t0mqm8pfoj3d8pkbvanp07ltxqkxblxyszg7nr8mmjwtb3xiqk035py4ibcsz5m3vxnml6toxhuzam59pe4w5x1k4ulo5pk61y0gp9zm9h210yv3920qqzcki7txxhnvx1bhuaz80ewnpwxnsjcru3081obgn1y1e87lfgk4s3nua1x7e5q6079vmlfysurma9todfu3io9rhtklqb62cs51httoj3repbpny5f28t8mtjl0qd8fair1xmbcne23r7vrw5x9dbjqjqeswjqheiimqgkl7ojj4n1vsrwffjlcotgdxt543uuq3ne8cljv70b9cdqgyp95mfg81rbe6ec7f4f9ucebtiitlaxv0vo5yyj249aeml9klfoznjqx9o4adtvsmrolt1es6v7i2ugngclmxv25va02yr284wlfszungdnef1dq7fubzzdqcxabuwsfo548b4ikaroh64a4iv1iwcmz0of8n3behmmcj5kgbddi7gn8gfwbtmcgb0u6o3qcodx4q5glaw9myut9vpr0liy7otr9qer5h6abv9dhuwne9hx4e7218j99eomqn1ncttd33puwuvh3njiizuojcj375vxikefutxterb 00:07:37.181 12:54:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:37.441 [2024-11-29 12:54:08.706367] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:07:37.441 [2024-11-29 12:54:08.706642] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61148 ] 00:07:37.441 [2024-11-29 12:54:08.853309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.441 [2024-11-29 12:54:08.898404] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.441 [2024-11-29 12:54:08.951599] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.377  [2024-11-29T12:54:10.151Z] Copying: 511/511 [MB] (average 1471 MBps) 00:07:38.636 00:07:38.636 12:54:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:38.636 12:54:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:38.636 12:54:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:38.636 12:54:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:38.636 [2024-11-29 12:54:09.962697] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:07:38.636 [2024-11-29 12:54:09.962766] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61169 ] 00:07:38.636 { 00:07:38.636 "subsystems": [ 00:07:38.636 { 00:07:38.636 "subsystem": "bdev", 00:07:38.636 "config": [ 00:07:38.636 { 00:07:38.636 "params": { 00:07:38.636 "block_size": 512, 00:07:38.636 "num_blocks": 1048576, 00:07:38.636 "name": "malloc0" 00:07:38.636 }, 00:07:38.636 "method": "bdev_malloc_create" 00:07:38.636 }, 00:07:38.636 { 00:07:38.636 "params": { 00:07:38.636 "filename": "/dev/zram1", 00:07:38.636 "name": "uring0" 00:07:38.636 }, 00:07:38.636 "method": "bdev_uring_create" 00:07:38.636 }, 00:07:38.636 { 00:07:38.636 "method": "bdev_wait_for_examine" 00:07:38.636 } 00:07:38.636 ] 00:07:38.636 } 00:07:38.636 ] 00:07:38.636 } 00:07:38.636 [2024-11-29 12:54:10.100844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.894 [2024-11-29 12:54:10.154308] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.894 [2024-11-29 12:54:10.211273] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.271  [2024-11-29T12:54:12.724Z] Copying: 219/512 [MB] (219 MBps) [2024-11-29T12:54:12.984Z] Copying: 437/512 [MB] (217 MBps) [2024-11-29T12:54:13.243Z] Copying: 512/512 [MB] (average 218 MBps) 00:07:41.728 00:07:41.728 12:54:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:41.728 12:54:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:41.728 12:54:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:41.728 12:54:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:41.728 [2024-11-29 12:54:13.224070] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:07:41.728 [2024-11-29 12:54:13.224190] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61213 ] 00:07:41.728 { 00:07:41.728 "subsystems": [ 00:07:41.728 { 00:07:41.728 "subsystem": "bdev", 00:07:41.728 "config": [ 00:07:41.728 { 00:07:41.728 "params": { 00:07:41.728 "block_size": 512, 00:07:41.728 "num_blocks": 1048576, 00:07:41.728 "name": "malloc0" 00:07:41.728 }, 00:07:41.728 "method": "bdev_malloc_create" 00:07:41.728 }, 00:07:41.728 { 00:07:41.728 "params": { 00:07:41.728 "filename": "/dev/zram1", 00:07:41.728 "name": "uring0" 00:07:41.728 }, 00:07:41.728 "method": "bdev_uring_create" 00:07:41.728 }, 00:07:41.728 { 00:07:41.728 "method": "bdev_wait_for_examine" 00:07:41.728 } 00:07:41.728 ] 00:07:41.728 } 00:07:41.728 ] 00:07:41.728 } 00:07:41.988 [2024-11-29 12:54:13.372451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.988 [2024-11-29 12:54:13.419674] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.988 [2024-11-29 12:54:13.475931] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.374  [2024-11-29T12:54:15.837Z] Copying: 178/512 [MB] (178 MBps) [2024-11-29T12:54:16.774Z] Copying: 332/512 [MB] (153 MBps) [2024-11-29T12:54:17.033Z] Copying: 469/512 [MB] (137 MBps) [2024-11-29T12:54:17.602Z] Copying: 512/512 [MB] (average 154 MBps) 00:07:46.087 00:07:46.087 12:54:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:46.088 12:54:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ v77xqo3iiur15m1zxf73uknjuknee185oaa1bhuiw67kagn2b2h0nghjg59m5gq3ia13rg8nzbjri53sd1z208rn1p53okw2myeszl6uyzli9ny108cyc9php3m6wjg3ic7i6l6rl21u2h30fwyuccz0t3dks224pfoau0hbnjx7d223kxdz7uq5dkc7n5ws12t89yrc9e51u8vemjigju2ko757y70pqdrofnk4rbn2nm6580fsvhx32fmro67wq3gcbols4ftu34wca2j2u3677ww5lw07tfux0b5d003dvu3ifxxbgir7jbene1879bb2vrypgxhsen8457i498fzpwowd9zqcljmu8a6zhwtxfwtl2hkq2pmaa59z0qgrh27xyc6hb4a4b07l6t0mqm8pfoj3d8pkbvanp07ltxqkxblxyszg7nr8mmjwtb3xiqk035py4ibcsz5m3vxnml6toxhuzam59pe4w5x1k4ulo5pk61y0gp9zm9h210yv3920qqzcki7txxhnvx1bhuaz80ewnpwxnsjcru3081obgn1y1e87lfgk4s3nua1x7e5q6079vmlfysurma9todfu3io9rhtklqb62cs51httoj3repbpny5f28t8mtjl0qd8fair1xmbcne23r7vrw5x9dbjqjqeswjqheiimqgkl7ojj4n1vsrwffjlcotgdxt543uuq3ne8cljv70b9cdqgyp95mfg81rbe6ec7f4f9ucebtiitlaxv0vo5yyj249aeml9klfoznjqx9o4adtvsmrolt1es6v7i2ugngclmxv25va02yr284wlfszungdnef1dq7fubzzdqcxabuwsfo548b4ikaroh64a4iv1iwcmz0of8n3behmmcj5kgbddi7gn8gfwbtmcgb0u6o3qcodx4q5glaw9myut9vpr0liy7otr9qer5h6abv9dhuwne9hx4e7218j99eomqn1ncttd33puwuvh3njiizuojcj375vxikefutxterb == 
\v\7\7\x\q\o\3\i\i\u\r\1\5\m\1\z\x\f\7\3\u\k\n\j\u\k\n\e\e\1\8\5\o\a\a\1\b\h\u\i\w\6\7\k\a\g\n\2\b\2\h\0\n\g\h\j\g\5\9\m\5\g\q\3\i\a\1\3\r\g\8\n\z\b\j\r\i\5\3\s\d\1\z\2\0\8\r\n\1\p\5\3\o\k\w\2\m\y\e\s\z\l\6\u\y\z\l\i\9\n\y\1\0\8\c\y\c\9\p\h\p\3\m\6\w\j\g\3\i\c\7\i\6\l\6\r\l\2\1\u\2\h\3\0\f\w\y\u\c\c\z\0\t\3\d\k\s\2\2\4\p\f\o\a\u\0\h\b\n\j\x\7\d\2\2\3\k\x\d\z\7\u\q\5\d\k\c\7\n\5\w\s\1\2\t\8\9\y\r\c\9\e\5\1\u\8\v\e\m\j\i\g\j\u\2\k\o\7\5\7\y\7\0\p\q\d\r\o\f\n\k\4\r\b\n\2\n\m\6\5\8\0\f\s\v\h\x\3\2\f\m\r\o\6\7\w\q\3\g\c\b\o\l\s\4\f\t\u\3\4\w\c\a\2\j\2\u\3\6\7\7\w\w\5\l\w\0\7\t\f\u\x\0\b\5\d\0\0\3\d\v\u\3\i\f\x\x\b\g\i\r\7\j\b\e\n\e\1\8\7\9\b\b\2\v\r\y\p\g\x\h\s\e\n\8\4\5\7\i\4\9\8\f\z\p\w\o\w\d\9\z\q\c\l\j\m\u\8\a\6\z\h\w\t\x\f\w\t\l\2\h\k\q\2\p\m\a\a\5\9\z\0\q\g\r\h\2\7\x\y\c\6\h\b\4\a\4\b\0\7\l\6\t\0\m\q\m\8\p\f\o\j\3\d\8\p\k\b\v\a\n\p\0\7\l\t\x\q\k\x\b\l\x\y\s\z\g\7\n\r\8\m\m\j\w\t\b\3\x\i\q\k\0\3\5\p\y\4\i\b\c\s\z\5\m\3\v\x\n\m\l\6\t\o\x\h\u\z\a\m\5\9\p\e\4\w\5\x\1\k\4\u\l\o\5\p\k\6\1\y\0\g\p\9\z\m\9\h\2\1\0\y\v\3\9\2\0\q\q\z\c\k\i\7\t\x\x\h\n\v\x\1\b\h\u\a\z\8\0\e\w\n\p\w\x\n\s\j\c\r\u\3\0\8\1\o\b\g\n\1\y\1\e\8\7\l\f\g\k\4\s\3\n\u\a\1\x\7\e\5\q\6\0\7\9\v\m\l\f\y\s\u\r\m\a\9\t\o\d\f\u\3\i\o\9\r\h\t\k\l\q\b\6\2\c\s\5\1\h\t\t\o\j\3\r\e\p\b\p\n\y\5\f\2\8\t\8\m\t\j\l\0\q\d\8\f\a\i\r\1\x\m\b\c\n\e\2\3\r\7\v\r\w\5\x\9\d\b\j\q\j\q\e\s\w\j\q\h\e\i\i\m\q\g\k\l\7\o\j\j\4\n\1\v\s\r\w\f\f\j\l\c\o\t\g\d\x\t\5\4\3\u\u\q\3\n\e\8\c\l\j\v\7\0\b\9\c\d\q\g\y\p\9\5\m\f\g\8\1\r\b\e\6\e\c\7\f\4\f\9\u\c\e\b\t\i\i\t\l\a\x\v\0\v\o\5\y\y\j\2\4\9\a\e\m\l\9\k\l\f\o\z\n\j\q\x\9\o\4\a\d\t\v\s\m\r\o\l\t\1\e\s\6\v\7\i\2\u\g\n\g\c\l\m\x\v\2\5\v\a\0\2\y\r\2\8\4\w\l\f\s\z\u\n\g\d\n\e\f\1\d\q\7\f\u\b\z\z\d\q\c\x\a\b\u\w\s\f\o\5\4\8\b\4\i\k\a\r\o\h\6\4\a\4\i\v\1\i\w\c\m\z\0\o\f\8\n\3\b\e\h\m\m\c\j\5\k\g\b\d\d\i\7\g\n\8\g\f\w\b\t\m\c\g\b\0\u\6\o\3\q\c\o\d\x\4\q\5\g\l\a\w\9\m\y\u\t\9\v\p\r\0\l\i\y\7\o\t\r\9\q\e\r\5\h\6\a\b\v\9\d\h\u\w\n\e\9\h\x\4\e\7\2\1\8\j\9\9\e\o\m\q\n\1\n\c\t\t\d\3\3\p\u\w\u\v\h\3\n\j\i\i\z\u\o\j\c\j\3\7\5\v\x\i\k\e\f\u\t\x\t\e\r\b ]] 00:07:46.088 12:54:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:46.088 12:54:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ v77xqo3iiur15m1zxf73uknjuknee185oaa1bhuiw67kagn2b2h0nghjg59m5gq3ia13rg8nzbjri53sd1z208rn1p53okw2myeszl6uyzli9ny108cyc9php3m6wjg3ic7i6l6rl21u2h30fwyuccz0t3dks224pfoau0hbnjx7d223kxdz7uq5dkc7n5ws12t89yrc9e51u8vemjigju2ko757y70pqdrofnk4rbn2nm6580fsvhx32fmro67wq3gcbols4ftu34wca2j2u3677ww5lw07tfux0b5d003dvu3ifxxbgir7jbene1879bb2vrypgxhsen8457i498fzpwowd9zqcljmu8a6zhwtxfwtl2hkq2pmaa59z0qgrh27xyc6hb4a4b07l6t0mqm8pfoj3d8pkbvanp07ltxqkxblxyszg7nr8mmjwtb3xiqk035py4ibcsz5m3vxnml6toxhuzam59pe4w5x1k4ulo5pk61y0gp9zm9h210yv3920qqzcki7txxhnvx1bhuaz80ewnpwxnsjcru3081obgn1y1e87lfgk4s3nua1x7e5q6079vmlfysurma9todfu3io9rhtklqb62cs51httoj3repbpny5f28t8mtjl0qd8fair1xmbcne23r7vrw5x9dbjqjqeswjqheiimqgkl7ojj4n1vsrwffjlcotgdxt543uuq3ne8cljv70b9cdqgyp95mfg81rbe6ec7f4f9ucebtiitlaxv0vo5yyj249aeml9klfoznjqx9o4adtvsmrolt1es6v7i2ugngclmxv25va02yr284wlfszungdnef1dq7fubzzdqcxabuwsfo548b4ikaroh64a4iv1iwcmz0of8n3behmmcj5kgbddi7gn8gfwbtmcgb0u6o3qcodx4q5glaw9myut9vpr0liy7otr9qer5h6abv9dhuwne9hx4e7218j99eomqn1ncttd33puwuvh3njiizuojcj375vxikefutxterb == 
\v\7\7\x\q\o\3\i\i\u\r\1\5\m\1\z\x\f\7\3\u\k\n\j\u\k\n\e\e\1\8\5\o\a\a\1\b\h\u\i\w\6\7\k\a\g\n\2\b\2\h\0\n\g\h\j\g\5\9\m\5\g\q\3\i\a\1\3\r\g\8\n\z\b\j\r\i\5\3\s\d\1\z\2\0\8\r\n\1\p\5\3\o\k\w\2\m\y\e\s\z\l\6\u\y\z\l\i\9\n\y\1\0\8\c\y\c\9\p\h\p\3\m\6\w\j\g\3\i\c\7\i\6\l\6\r\l\2\1\u\2\h\3\0\f\w\y\u\c\c\z\0\t\3\d\k\s\2\2\4\p\f\o\a\u\0\h\b\n\j\x\7\d\2\2\3\k\x\d\z\7\u\q\5\d\k\c\7\n\5\w\s\1\2\t\8\9\y\r\c\9\e\5\1\u\8\v\e\m\j\i\g\j\u\2\k\o\7\5\7\y\7\0\p\q\d\r\o\f\n\k\4\r\b\n\2\n\m\6\5\8\0\f\s\v\h\x\3\2\f\m\r\o\6\7\w\q\3\g\c\b\o\l\s\4\f\t\u\3\4\w\c\a\2\j\2\u\3\6\7\7\w\w\5\l\w\0\7\t\f\u\x\0\b\5\d\0\0\3\d\v\u\3\i\f\x\x\b\g\i\r\7\j\b\e\n\e\1\8\7\9\b\b\2\v\r\y\p\g\x\h\s\e\n\8\4\5\7\i\4\9\8\f\z\p\w\o\w\d\9\z\q\c\l\j\m\u\8\a\6\z\h\w\t\x\f\w\t\l\2\h\k\q\2\p\m\a\a\5\9\z\0\q\g\r\h\2\7\x\y\c\6\h\b\4\a\4\b\0\7\l\6\t\0\m\q\m\8\p\f\o\j\3\d\8\p\k\b\v\a\n\p\0\7\l\t\x\q\k\x\b\l\x\y\s\z\g\7\n\r\8\m\m\j\w\t\b\3\x\i\q\k\0\3\5\p\y\4\i\b\c\s\z\5\m\3\v\x\n\m\l\6\t\o\x\h\u\z\a\m\5\9\p\e\4\w\5\x\1\k\4\u\l\o\5\p\k\6\1\y\0\g\p\9\z\m\9\h\2\1\0\y\v\3\9\2\0\q\q\z\c\k\i\7\t\x\x\h\n\v\x\1\b\h\u\a\z\8\0\e\w\n\p\w\x\n\s\j\c\r\u\3\0\8\1\o\b\g\n\1\y\1\e\8\7\l\f\g\k\4\s\3\n\u\a\1\x\7\e\5\q\6\0\7\9\v\m\l\f\y\s\u\r\m\a\9\t\o\d\f\u\3\i\o\9\r\h\t\k\l\q\b\6\2\c\s\5\1\h\t\t\o\j\3\r\e\p\b\p\n\y\5\f\2\8\t\8\m\t\j\l\0\q\d\8\f\a\i\r\1\x\m\b\c\n\e\2\3\r\7\v\r\w\5\x\9\d\b\j\q\j\q\e\s\w\j\q\h\e\i\i\m\q\g\k\l\7\o\j\j\4\n\1\v\s\r\w\f\f\j\l\c\o\t\g\d\x\t\5\4\3\u\u\q\3\n\e\8\c\l\j\v\7\0\b\9\c\d\q\g\y\p\9\5\m\f\g\8\1\r\b\e\6\e\c\7\f\4\f\9\u\c\e\b\t\i\i\t\l\a\x\v\0\v\o\5\y\y\j\2\4\9\a\e\m\l\9\k\l\f\o\z\n\j\q\x\9\o\4\a\d\t\v\s\m\r\o\l\t\1\e\s\6\v\7\i\2\u\g\n\g\c\l\m\x\v\2\5\v\a\0\2\y\r\2\8\4\w\l\f\s\z\u\n\g\d\n\e\f\1\d\q\7\f\u\b\z\z\d\q\c\x\a\b\u\w\s\f\o\5\4\8\b\4\i\k\a\r\o\h\6\4\a\4\i\v\1\i\w\c\m\z\0\o\f\8\n\3\b\e\h\m\m\c\j\5\k\g\b\d\d\i\7\g\n\8\g\f\w\b\t\m\c\g\b\0\u\6\o\3\q\c\o\d\x\4\q\5\g\l\a\w\9\m\y\u\t\9\v\p\r\0\l\i\y\7\o\t\r\9\q\e\r\5\h\6\a\b\v\9\d\h\u\w\n\e\9\h\x\4\e\7\2\1\8\j\9\9\e\o\m\q\n\1\n\c\t\t\d\3\3\p\u\w\u\v\h\3\n\j\i\i\z\u\o\j\c\j\3\7\5\v\x\i\k\e\f\u\t\x\t\e\r\b ]] 00:07:46.088 12:54:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:46.657 12:54:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:46.657 12:54:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:46.657 12:54:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:46.657 12:54:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:46.657 [2024-11-29 12:54:18.048014] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:07:46.657 [2024-11-29 12:54:18.048115] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61291 ] 00:07:46.657 { 00:07:46.657 "subsystems": [ 00:07:46.657 { 00:07:46.657 "subsystem": "bdev", 00:07:46.657 "config": [ 00:07:46.657 { 00:07:46.657 "params": { 00:07:46.657 "block_size": 512, 00:07:46.657 "num_blocks": 1048576, 00:07:46.657 "name": "malloc0" 00:07:46.657 }, 00:07:46.657 "method": "bdev_malloc_create" 00:07:46.657 }, 00:07:46.657 { 00:07:46.657 "params": { 00:07:46.657 "filename": "/dev/zram1", 00:07:46.657 "name": "uring0" 00:07:46.657 }, 00:07:46.657 "method": "bdev_uring_create" 00:07:46.657 }, 00:07:46.657 { 00:07:46.657 "method": "bdev_wait_for_examine" 00:07:46.657 } 00:07:46.657 ] 00:07:46.657 } 00:07:46.657 ] 00:07:46.657 } 00:07:46.917 [2024-11-29 12:54:18.193078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.917 [2024-11-29 12:54:18.282069] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.917 [2024-11-29 12:54:18.363993] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.319  [2024-11-29T12:54:20.773Z] Copying: 151/512 [MB] (151 MBps) [2024-11-29T12:54:21.710Z] Copying: 305/512 [MB] (153 MBps) [2024-11-29T12:54:22.278Z] Copying: 460/512 [MB] (154 MBps) [2024-11-29T12:54:22.846Z] Copying: 512/512 [MB] (average 153 MBps) 00:07:51.331 00:07:51.331 12:54:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:51.331 12:54:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:51.331 12:54:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:51.331 12:54:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:51.331 12:54:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:51.331 12:54:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:07:51.331 12:54:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:51.331 12:54:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:51.331 [2024-11-29 12:54:22.652836] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:07:51.331 [2024-11-29 12:54:22.653153] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61363 ] 00:07:51.331 { 00:07:51.331 "subsystems": [ 00:07:51.331 { 00:07:51.331 "subsystem": "bdev", 00:07:51.331 "config": [ 00:07:51.331 { 00:07:51.331 "params": { 00:07:51.331 "block_size": 512, 00:07:51.331 "num_blocks": 1048576, 00:07:51.331 "name": "malloc0" 00:07:51.331 }, 00:07:51.331 "method": "bdev_malloc_create" 00:07:51.331 }, 00:07:51.331 { 00:07:51.331 "params": { 00:07:51.331 "filename": "/dev/zram1", 00:07:51.331 "name": "uring0" 00:07:51.331 }, 00:07:51.331 "method": "bdev_uring_create" 00:07:51.331 }, 00:07:51.331 { 00:07:51.331 "params": { 00:07:51.331 "name": "uring0" 00:07:51.331 }, 00:07:51.331 "method": "bdev_uring_delete" 00:07:51.331 }, 00:07:51.331 { 00:07:51.331 "method": "bdev_wait_for_examine" 00:07:51.331 } 00:07:51.331 ] 00:07:51.331 } 00:07:51.331 ] 00:07:51.331 } 00:07:51.331 [2024-11-29 12:54:22.798453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.590 [2024-11-29 12:54:22.878674] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.590 [2024-11-29 12:54:22.969150] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.849  [2024-11-29T12:54:23.933Z] Copying: 0/0 [B] (average 0 Bps) 00:07:52.418 00:07:52.418 12:54:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:52.418 12:54:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:52.418 12:54:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:52.418 12:54:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:07:52.418 12:54:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:52.418 12:54:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:52.418 12:54:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.418 12:54:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:52.418 12:54:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.418 12:54:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.418 12:54:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.418 12:54:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.418 12:54:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.418 12:54:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.418 12:54:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:52.418 12:54:23 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:52.677 [2024-11-29 12:54:23.947958] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:07:52.677 [2024-11-29 12:54:23.948077] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61392 ] 00:07:52.677 { 00:07:52.677 "subsystems": [ 00:07:52.677 { 00:07:52.677 "subsystem": "bdev", 00:07:52.677 "config": [ 00:07:52.677 { 00:07:52.677 "params": { 00:07:52.677 "block_size": 512, 00:07:52.677 "num_blocks": 1048576, 00:07:52.677 "name": "malloc0" 00:07:52.677 }, 00:07:52.677 "method": "bdev_malloc_create" 00:07:52.677 }, 00:07:52.677 { 00:07:52.677 "params": { 00:07:52.677 "filename": "/dev/zram1", 00:07:52.677 "name": "uring0" 00:07:52.677 }, 00:07:52.677 "method": "bdev_uring_create" 00:07:52.677 }, 00:07:52.677 { 00:07:52.677 "params": { 00:07:52.677 "name": "uring0" 00:07:52.677 }, 00:07:52.677 "method": "bdev_uring_delete" 00:07:52.677 }, 00:07:52.677 { 00:07:52.677 "method": "bdev_wait_for_examine" 00:07:52.677 } 00:07:52.677 ] 00:07:52.677 } 00:07:52.677 ] 00:07:52.677 } 00:07:52.677 [2024-11-29 12:54:24.092297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.677 [2024-11-29 12:54:24.158057] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.936 [2024-11-29 12:54:24.238267] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:53.195 [2024-11-29 12:54:24.531342] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:53.195 [2024-11-29 12:54:24.531433] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:53.195 [2024-11-29 12:54:24.531460] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:07:53.195 [2024-11-29 12:54:24.531470] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:53.774 [2024-11-29 12:54:25.054458] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:53.774 12:54:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:07:53.774 12:54:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:53.774 12:54:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:07:53.774 12:54:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:07:53.774 12:54:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:07:53.774 12:54:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:53.774 12:54:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:53.774 12:54:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:07:53.774 12:54:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:07:53.774 12:54:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:07:53.774 12:54:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:07:53.774 12:54:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:54.033 00:07:54.033 real 0m16.793s 00:07:54.033 user 0m11.321s 00:07:54.033 sys 0m13.992s 00:07:54.033 12:54:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.033 12:54:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:54.033 ************************************ 00:07:54.033 END TEST dd_uring_copy 00:07:54.033 ************************************ 00:07:54.033 ************************************ 00:07:54.033 END TEST spdk_dd_uring 00:07:54.033 ************************************ 00:07:54.033 00:07:54.033 real 0m17.035s 00:07:54.033 user 0m11.460s 00:07:54.033 sys 0m14.100s 00:07:54.033 12:54:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.033 12:54:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:54.033 12:54:25 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:54.033 12:54:25 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:54.033 12:54:25 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.033 12:54:25 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:54.033 ************************************ 00:07:54.033 START TEST spdk_dd_sparse 00:07:54.033 ************************************ 00:07:54.033 12:54:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:54.293 * Looking for test storage... 00:07:54.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lcov --version 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:54.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.293 --rc genhtml_branch_coverage=1 00:07:54.293 --rc genhtml_function_coverage=1 00:07:54.293 --rc genhtml_legend=1 00:07:54.293 --rc geninfo_all_blocks=1 00:07:54.293 --rc geninfo_unexecuted_blocks=1 00:07:54.293 00:07:54.293 ' 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:54.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.293 --rc genhtml_branch_coverage=1 00:07:54.293 --rc genhtml_function_coverage=1 00:07:54.293 --rc genhtml_legend=1 00:07:54.293 --rc geninfo_all_blocks=1 00:07:54.293 --rc geninfo_unexecuted_blocks=1 00:07:54.293 00:07:54.293 ' 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:54.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.293 --rc genhtml_branch_coverage=1 00:07:54.293 --rc genhtml_function_coverage=1 00:07:54.293 --rc genhtml_legend=1 00:07:54.293 --rc geninfo_all_blocks=1 00:07:54.293 --rc geninfo_unexecuted_blocks=1 00:07:54.293 00:07:54.293 ' 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:54.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.293 --rc genhtml_branch_coverage=1 00:07:54.293 --rc genhtml_function_coverage=1 00:07:54.293 --rc genhtml_legend=1 00:07:54.293 --rc geninfo_all_blocks=1 00:07:54.293 --rc geninfo_unexecuted_blocks=1 00:07:54.293 00:07:54.293 ' 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.293 12:54:25 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:54.293 1+0 records in 00:07:54.293 1+0 records out 00:07:54.293 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00893134 s, 470 MB/s 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:54.293 1+0 records in 00:07:54.293 1+0 records out 00:07:54.293 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00718734 s, 584 MB/s 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:54.293 1+0 records in 00:07:54.293 1+0 records out 00:07:54.293 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0115549 s, 363 MB/s 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:54.293 ************************************ 00:07:54.293 START TEST dd_sparse_file_to_file 00:07:54.293 ************************************ 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:54.293 12:54:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:54.552 [2024-11-29 12:54:25.838936] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:07:54.552 [2024-11-29 12:54:25.839226] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61497 ] 00:07:54.552 { 00:07:54.552 "subsystems": [ 00:07:54.552 { 00:07:54.552 "subsystem": "bdev", 00:07:54.552 "config": [ 00:07:54.552 { 00:07:54.552 "params": { 00:07:54.552 "block_size": 4096, 00:07:54.552 "filename": "dd_sparse_aio_disk", 00:07:54.552 "name": "dd_aio" 00:07:54.552 }, 00:07:54.552 "method": "bdev_aio_create" 00:07:54.552 }, 00:07:54.552 { 00:07:54.552 "params": { 00:07:54.552 "lvs_name": "dd_lvstore", 00:07:54.552 "bdev_name": "dd_aio" 00:07:54.552 }, 00:07:54.552 "method": "bdev_lvol_create_lvstore" 00:07:54.552 }, 00:07:54.552 { 00:07:54.552 "method": "bdev_wait_for_examine" 00:07:54.552 } 00:07:54.552 ] 00:07:54.552 } 00:07:54.552 ] 00:07:54.552 } 00:07:54.552 [2024-11-29 12:54:25.985112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.552 [2024-11-29 12:54:26.062548] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.811 [2024-11-29 12:54:26.149372] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.811  [2024-11-29T12:54:26.908Z] Copying: 12/36 [MB] (average 631 MBps) 00:07:55.393 00:07:55.393 12:54:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:55.393 12:54:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:55.393 12:54:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:55.393 12:54:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:55.393 12:54:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:55.393 12:54:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:55.393 12:54:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:55.393 12:54:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:55.393 ************************************ 00:07:55.393 END TEST dd_sparse_file_to_file 00:07:55.393 12:54:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:55.393 12:54:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:55.393 00:07:55.393 real 0m0.882s 00:07:55.393 user 0m0.539s 00:07:55.393 sys 0m0.532s 00:07:55.393 12:54:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.393 12:54:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:55.393 ************************************ 00:07:55.393 12:54:26 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:55.393 12:54:26 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:55.393 12:54:26 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.393 12:54:26 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:55.393 ************************************ 00:07:55.393 START TEST dd_sparse_file_to_bdev 
00:07:55.393 ************************************ 00:07:55.393 12:54:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:07:55.393 12:54:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:55.393 12:54:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:55.393 12:54:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:07:55.393 12:54:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:55.393 12:54:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:55.393 12:54:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:55.393 12:54:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:55.393 12:54:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:55.393 [2024-11-29 12:54:26.776343] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:07:55.393 [2024-11-29 12:54:26.776457] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61540 ] 00:07:55.393 { 00:07:55.393 "subsystems": [ 00:07:55.393 { 00:07:55.393 "subsystem": "bdev", 00:07:55.393 "config": [ 00:07:55.393 { 00:07:55.393 "params": { 00:07:55.393 "block_size": 4096, 00:07:55.393 "filename": "dd_sparse_aio_disk", 00:07:55.393 "name": "dd_aio" 00:07:55.393 }, 00:07:55.393 "method": "bdev_aio_create" 00:07:55.393 }, 00:07:55.393 { 00:07:55.393 "params": { 00:07:55.393 "lvs_name": "dd_lvstore", 00:07:55.393 "lvol_name": "dd_lvol", 00:07:55.393 "size_in_mib": 36, 00:07:55.393 "thin_provision": true 00:07:55.393 }, 00:07:55.393 "method": "bdev_lvol_create" 00:07:55.393 }, 00:07:55.393 { 00:07:55.393 "method": "bdev_wait_for_examine" 00:07:55.393 } 00:07:55.393 ] 00:07:55.393 } 00:07:55.393 ] 00:07:55.393 } 00:07:55.662 [2024-11-29 12:54:26.921801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.662 [2024-11-29 12:54:26.983270] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.662 [2024-11-29 12:54:27.059118] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.920  [2024-11-29T12:54:27.694Z] Copying: 12/36 [MB] (average 444 MBps) 00:07:56.179 00:07:56.179 00:07:56.179 real 0m0.770s 00:07:56.179 user 0m0.489s 00:07:56.179 sys 0m0.455s 00:07:56.179 12:54:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.179 12:54:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:56.179 ************************************ 00:07:56.179 END TEST dd_sparse_file_to_bdev 00:07:56.179 ************************************ 00:07:56.179 12:54:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:07:56.179 12:54:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.179 12:54:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.179 12:54:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:56.179 ************************************ 00:07:56.179 START TEST dd_sparse_bdev_to_file 00:07:56.179 ************************************ 00:07:56.179 12:54:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:07:56.179 12:54:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:56.179 12:54:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:56.179 12:54:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:56.179 12:54:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:56.179 12:54:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:56.179 12:54:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:07:56.179 12:54:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:56.179 12:54:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:56.179 { 00:07:56.179 "subsystems": [ 00:07:56.179 { 00:07:56.179 "subsystem": "bdev", 00:07:56.179 "config": [ 00:07:56.179 { 00:07:56.179 "params": { 00:07:56.179 "block_size": 4096, 00:07:56.179 "filename": "dd_sparse_aio_disk", 00:07:56.179 "name": "dd_aio" 00:07:56.179 }, 00:07:56.179 "method": "bdev_aio_create" 00:07:56.179 }, 00:07:56.179 { 00:07:56.179 "method": "bdev_wait_for_examine" 00:07:56.179 } 00:07:56.179 ] 00:07:56.179 } 00:07:56.179 ] 00:07:56.179 } 00:07:56.179 [2024-11-29 12:54:27.602777] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
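dd_sparse_file_to_bdev (above) pushed file_zero2 into a freshly created 36 MiB thin-provisioned logical volume with --ob=dd_lvstore/dd_lvol, and dd_sparse_bdev_to_file now reads the same volume back into file_zero3 with --ib. A rough stand-alone equivalent of the read-back leg is sketched below; as in the JSON recorded above, only the AIO bdev is declared because the existing lvstore and lvol are rediscovered during examine. The config file name and relative spdk_dd path are illustrative assumptions.

```bash
# Sketch only: the lvol -> file direction exercised by dd_sparse_bdev_to_file.
# dd_lvstore/dd_lvol already exists on the AIO bdev, so the config just attaches
# dd_sparse_aio_disk and waits for examine to surface the logical volume.
cat > from_bdev.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev", "config": [
      { "method": "bdev_aio_create",
        "params": { "filename": "dd_sparse_aio_disk", "name": "dd_aio", "block_size": 4096 } },
      { "method": "bdev_wait_for_examine" }
    ] }
  ]
}
EOF

./build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json from_bdev.json

# The round trip preserves both the apparent size and the allocated blocks:
stat --printf='%s %b\n' file_zero2 file_zero3            # this run: 37748736 bytes, 24576 blocks each
```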
00:07:56.179 [2024-11-29 12:54:27.602898] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61578 ] 00:07:56.439 [2024-11-29 12:54:27.750502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.439 [2024-11-29 12:54:27.804463] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.439 [2024-11-29 12:54:27.879539] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:56.699  [2024-11-29T12:54:28.473Z] Copying: 12/36 [MB] (average 600 MBps) 00:07:56.958 00:07:56.958 12:54:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:56.958 12:54:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:56.958 12:54:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:56.958 12:54:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:56.958 12:54:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:56.958 12:54:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:56.958 12:54:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:56.958 12:54:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:56.958 12:54:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:56.958 12:54:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:56.958 00:07:56.958 real 0m0.763s 00:07:56.958 user 0m0.467s 00:07:56.958 sys 0m0.481s 00:07:56.958 12:54:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.958 ************************************ 00:07:56.958 END TEST dd_sparse_bdev_to_file 00:07:56.958 ************************************ 00:07:56.958 12:54:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:56.958 12:54:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:56.958 12:54:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:56.958 12:54:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:56.958 12:54:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:07:56.958 12:54:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:56.958 00:07:56.958 real 0m2.859s 00:07:56.958 user 0m1.669s 00:07:56.958 sys 0m1.729s 00:07:56.958 12:54:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.958 ************************************ 00:07:56.958 END TEST spdk_dd_sparse 00:07:56.958 ************************************ 00:07:56.958 12:54:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:56.958 12:54:28 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:56.958 12:54:28 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.958 12:54:28 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.958 12:54:28 spdk_dd -- 
common/autotest_common.sh@10 -- # set +x 00:07:56.958 ************************************ 00:07:56.958 START TEST spdk_dd_negative 00:07:56.958 ************************************ 00:07:56.958 12:54:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:57.218 * Looking for test storage... 00:07:57.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:57.218 12:54:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:57.218 12:54:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lcov --version 00:07:57.218 12:54:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:57.218 12:54:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:57.218 12:54:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:57.218 12:54:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:57.218 12:54:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:57.218 12:54:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:07:57.218 12:54:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:07:57.218 12:54:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:07:57.218 12:54:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:07:57.218 12:54:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:07:57.218 12:54:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:07:57.218 12:54:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:07:57.218 12:54:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:57.218 12:54:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:07:57.218 12:54:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:07:57.218 12:54:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:57.218 12:54:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:57.218 12:54:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:07:57.218 12:54:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:07:57.218 12:54:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.218 12:54:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:07:57.218 12:54:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:07:57.218 12:54:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:07:57.218 12:54:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:07:57.218 12:54:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:57.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.219 --rc genhtml_branch_coverage=1 00:07:57.219 --rc genhtml_function_coverage=1 00:07:57.219 --rc genhtml_legend=1 00:07:57.219 --rc geninfo_all_blocks=1 00:07:57.219 --rc geninfo_unexecuted_blocks=1 00:07:57.219 00:07:57.219 ' 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:57.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.219 --rc genhtml_branch_coverage=1 00:07:57.219 --rc genhtml_function_coverage=1 00:07:57.219 --rc genhtml_legend=1 00:07:57.219 --rc geninfo_all_blocks=1 00:07:57.219 --rc geninfo_unexecuted_blocks=1 00:07:57.219 00:07:57.219 ' 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:57.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.219 --rc genhtml_branch_coverage=1 00:07:57.219 --rc genhtml_function_coverage=1 00:07:57.219 --rc genhtml_legend=1 00:07:57.219 --rc geninfo_all_blocks=1 00:07:57.219 --rc geninfo_unexecuted_blocks=1 00:07:57.219 00:07:57.219 ' 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:57.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.219 --rc genhtml_branch_coverage=1 00:07:57.219 --rc genhtml_function_coverage=1 00:07:57.219 --rc genhtml_legend=1 00:07:57.219 --rc geninfo_all_blocks=1 00:07:57.219 --rc geninfo_unexecuted_blocks=1 00:07:57.219 00:07:57.219 ' 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:57.219 ************************************ 00:07:57.219 START TEST 
dd_invalid_arguments 00:07:57.219 ************************************ 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:57.219 12:54:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:57.219 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:57.219 00:07:57.219 CPU options: 00:07:57.219 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:57.219 (like [0,1,10]) 00:07:57.219 --lcores lcore to CPU mapping list. The list is in the format: 00:07:57.219 [<,lcores[@CPUs]>...] 00:07:57.219 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:57.219 Within the group, '-' is used for range separator, 00:07:57.219 ',' is used for single number separator. 00:07:57.219 '( )' can be omitted for single element group, 00:07:57.219 '@' can be omitted if cpus and lcores have the same value 00:07:57.219 --disable-cpumask-locks Disable CPU core lock files. 00:07:57.220 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:57.220 pollers in the app support interrupt mode) 00:07:57.220 -p, --main-core main (primary) core for DPDK 00:07:57.220 00:07:57.220 Configuration options: 00:07:57.220 -c, --config, --json JSON config file 00:07:57.220 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:57.220 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:07:57.220 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:57.220 --rpcs-allowed comma-separated list of permitted RPCS 00:07:57.220 --json-ignore-init-errors don't exit on invalid config entry 00:07:57.220 00:07:57.220 Memory options: 00:07:57.220 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:57.220 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:57.220 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:57.220 -R, --huge-unlink unlink huge files after initialization 00:07:57.220 -n, --mem-channels number of memory channels used for DPDK 00:07:57.220 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:57.220 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:57.220 --no-huge run without using hugepages 00:07:57.220 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:07:57.220 -i, --shm-id shared memory ID (optional) 00:07:57.220 -g, --single-file-segments force creating just one hugetlbfs file 00:07:57.220 00:07:57.220 PCI options: 00:07:57.220 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:57.220 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:57.220 -u, --no-pci disable PCI access 00:07:57.220 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:57.220 00:07:57.220 Log options: 00:07:57.220 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:57.220 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:57.220 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:57.220 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:57.220 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:07:57.220 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:07:57.220 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:07:57.220 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:07:57.220 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:07:57.220 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:07:57.220 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:07:57.220 --silence-noticelog disable notice level logging to stderr 00:07:57.220 00:07:57.220 Trace options: 00:07:57.220 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:57.220 setting 0 to disable trace (default 32768) 00:07:57.220 Tracepoints vary in size and can use more than one trace entry. 00:07:57.220 -e, --tpoint-group [:] 00:07:57.220 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:57.220 [2024-11-29 12:54:28.698610] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:07:57.220 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:57.220 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:07:57.220 bdev_raid, scheduler, all). 00:07:57.220 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:57.220 a tracepoint group. First tpoint inside a group can be enabled by 00:07:57.220 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:57.220 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:07:57.220 in /include/spdk_internal/trace_defs.h 00:07:57.220 00:07:57.220 Other options: 00:07:57.220 -h, --help show this usage 00:07:57.220 -v, --version print SPDK version 00:07:57.220 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:57.220 --env-context Opaque context for use of the env implementation 00:07:57.220 00:07:57.220 Application specific: 00:07:57.220 [--------- DD Options ---------] 00:07:57.220 --if Input file. Must specify either --if or --ib. 00:07:57.220 --ib Input bdev. Must specifier either --if or --ib 00:07:57.220 --of Output file. Must specify either --of or --ob. 00:07:57.220 --ob Output bdev. Must specify either --of or --ob. 00:07:57.220 --iflag Input file flags. 00:07:57.220 --oflag Output file flags. 00:07:57.220 --bs I/O unit size (default: 4096) 00:07:57.220 --qd Queue depth (default: 2) 00:07:57.220 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:57.220 --skip Skip this many I/O units at start of input. (default: 0) 00:07:57.220 --seek Skip this many I/O units at start of output. (default: 0) 00:07:57.220 --aio Force usage of AIO. (by default io_uring is used if available) 00:07:57.220 --sparse Enable hole skipping in input target 00:07:57.220 Available iflag and oflag values: 00:07:57.220 append - append mode 00:07:57.220 direct - use direct I/O for data 00:07:57.220 directory - fail unless a directory 00:07:57.220 dsync - use synchronized I/O for data 00:07:57.220 noatime - do not update access time 00:07:57.220 noctty - do not assign controlling terminal from file 00:07:57.220 nofollow - do not follow symlinks 00:07:57.220 nonblock - use non-blocking I/O 00:07:57.220 sync - use synchronized I/O for data and metadata 00:07:57.220 12:54:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:07:57.220 12:54:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:57.220 12:54:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:57.220 12:54:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:57.220 00:07:57.220 real 0m0.082s 00:07:57.220 user 0m0.050s 00:07:57.220 sys 0m0.031s 00:07:57.220 12:54:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.220 12:54:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:57.220 ************************************ 00:07:57.220 END TEST dd_invalid_arguments 00:07:57.220 ************************************ 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:57.480 ************************************ 00:07:57.480 START TEST dd_double_input 00:07:57.480 ************************************ 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:57.480 [2024-11-29 12:54:28.831861] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
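The spdk_dd_negative cases all share one shape: the NOT helper from autotest_common.sh runs spdk_dd with an intentionally bad argument set and passes only if the command exits non-zero (here, --if combined with --ib draws "You may specify either --if or --ib, but not both."). A hand-rolled approximation of that check, without the harness helpers, is sketched below; the dump-file path and the error-log handling are illustrative, not the test's own mechanism.

```bash
# Sketch only: assert that spdk_dd rejects --if together with --ib, as the
# dd_double_input case above does via the NOT wrapper.
if ./build/bin/spdk_dd --if=test/dd/dd.dump0 --ib= --ob= 2>err.log; then
    echo "FAIL: spdk_dd accepted both --if and --ib" >&2
    exit 1
fi
grep -q 'either --if or --ib' err.log && echo "PASS: conflicting input arguments rejected"
```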
00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:57.480 00:07:57.480 real 0m0.079s 00:07:57.480 user 0m0.047s 00:07:57.480 sys 0m0.031s 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:57.480 ************************************ 00:07:57.480 END TEST dd_double_input 00:07:57.480 ************************************ 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:57.480 ************************************ 00:07:57.480 START TEST dd_double_output 00:07:57.480 ************************************ 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:57.480 [2024-11-29 12:54:28.961925] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:57.480 00:07:57.480 real 0m0.075s 00:07:57.480 user 0m0.048s 00:07:57.480 sys 0m0.026s 00:07:57.480 12:54:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.481 12:54:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:57.481 ************************************ 00:07:57.481 END TEST dd_double_output 00:07:57.481 ************************************ 00:07:57.739 12:54:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:07:57.739 12:54:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:57.739 12:54:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.739 12:54:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:57.739 ************************************ 00:07:57.739 START TEST dd_no_input 00:07:57.739 ************************************ 00:07:57.739 12:54:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:07:57.739 12:54:29 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:57.739 12:54:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:07:57.739 12:54:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:57.739 12:54:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.739 12:54:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.739 12:54:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.739 12:54:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.739 12:54:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.739 12:54:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.739 12:54:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.739 12:54:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:57.739 12:54:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:57.739 [2024-11-29 12:54:29.090287] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:07:57.739 12:54:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:07:57.739 12:54:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:57.739 12:54:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:57.739 12:54:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:57.739 00:07:57.739 real 0m0.079s 00:07:57.740 user 0m0.052s 00:07:57.740 sys 0m0.026s 00:07:57.740 12:54:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.740 12:54:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:57.740 ************************************ 00:07:57.740 END TEST dd_no_input 00:07:57.740 ************************************ 00:07:57.740 12:54:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:07:57.740 12:54:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:57.740 12:54:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.740 12:54:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:57.740 ************************************ 00:07:57.740 START TEST dd_no_output 00:07:57.740 ************************************ 00:07:57.740 12:54:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:07:57.740 12:54:29 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:57.740 12:54:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:07:57.740 12:54:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:57.740 12:54:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.740 12:54:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.740 12:54:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.740 12:54:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.740 12:54:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.740 12:54:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.740 12:54:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.740 12:54:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:57.740 12:54:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:57.740 [2024-11-29 12:54:29.234204] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:07:57.999 12:54:29 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:07:57.999 12:54:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:57.999 12:54:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:57.999 12:54:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:57.999 00:07:57.999 real 0m0.108s 00:07:57.999 user 0m0.070s 00:07:57.999 sys 0m0.037s 00:07:57.999 12:54:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.999 12:54:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:57.999 ************************************ 00:07:57.999 END TEST dd_no_output 00:07:57.999 ************************************ 00:07:57.999 12:54:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:57.999 12:54:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:57.999 12:54:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.999 12:54:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:57.999 ************************************ 00:07:57.999 START TEST dd_wrong_blocksize 00:07:57.999 ************************************ 00:07:57.999 12:54:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:07:57.999 12:54:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:57.999 12:54:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:07:57.999 12:54:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:57.999 12:54:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.999 12:54:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.999 12:54:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.999 12:54:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.999 12:54:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.999 12:54:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.999 12:54:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.999 12:54:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:58.000 12:54:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:58.000 [2024-11-29 12:54:29.384498] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:07:58.000 12:54:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:07:58.000 12:54:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:58.000 12:54:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:58.000 12:54:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:58.000 00:07:58.000 real 0m0.081s 00:07:58.000 user 0m0.048s 00:07:58.000 sys 0m0.032s 00:07:58.000 12:54:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.000 12:54:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:58.000 ************************************ 00:07:58.000 END TEST dd_wrong_blocksize 00:07:58.000 ************************************ 00:07:58.000 12:54:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:58.000 12:54:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:58.000 12:54:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.000 12:54:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:58.000 ************************************ 00:07:58.000 START TEST dd_smaller_blocksize 00:07:58.000 ************************************ 00:07:58.000 12:54:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:07:58.000 12:54:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:58.000 12:54:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:07:58.000 12:54:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:58.000 12:54:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.000 12:54:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.000 12:54:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.000 12:54:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.000 12:54:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.000 12:54:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.000 12:54:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.000 
12:54:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:58.000 12:54:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:58.259 [2024-11-29 12:54:29.525583] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:07:58.259 [2024-11-29 12:54:29.525686] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61810 ] 00:07:58.259 [2024-11-29 12:54:29.678424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.259 [2024-11-29 12:54:29.765819] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.518 [2024-11-29 12:54:29.848223] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.777 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:59.036 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:59.036 [2024-11-29 12:54:30.549244] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:59.036 [2024-11-29 12:54:30.549333] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:59.296 [2024-11-29 12:54:30.741996] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:59.556 12:54:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:07:59.556 12:54:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:59.556 12:54:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:07:59.556 12:54:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:07:59.556 12:54:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:07:59.556 12:54:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:59.556 00:07:59.556 real 0m1.364s 00:07:59.556 user 0m0.521s 00:07:59.556 sys 0m0.733s 00:07:59.556 12:54:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.556 12:54:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:59.556 ************************************ 00:07:59.556 END TEST dd_smaller_blocksize 00:07:59.556 ************************************ 00:07:59.556 12:54:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:07:59.556 12:54:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.556 12:54:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.556 12:54:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:59.556 ************************************ 00:07:59.556 START TEST dd_invalid_count 00:07:59.556 ************************************ 00:07:59.556 12:54:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
00:07:59.556 12:54:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:59.556 12:54:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:07:59.556 12:54:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:59.556 12:54:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.556 12:54:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.556 12:54:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.556 12:54:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.556 12:54:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.556 12:54:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.556 12:54:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.556 12:54:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:59.556 12:54:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:59.556 [2024-11-29 12:54:30.954726] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:07:59.556 12:54:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:07:59.556 12:54:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:59.556 12:54:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:59.556 12:54:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:59.556 00:07:59.556 real 0m0.084s 00:07:59.556 user 0m0.053s 00:07:59.556 sys 0m0.030s 00:07:59.556 12:54:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.556 12:54:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:07:59.556 ************************************ 00:07:59.556 END TEST dd_invalid_count 00:07:59.556 ************************************ 00:07:59.556 12:54:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:07:59.556 12:54:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.556 12:54:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.556 12:54:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:59.556 ************************************ 
00:07:59.556 START TEST dd_invalid_oflag 00:07:59.556 ************************************ 00:07:59.556 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:07:59.556 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:59.556 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:07:59.556 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:59.556 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.556 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.556 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.556 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.556 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.556 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.556 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.556 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:59.556 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:59.816 [2024-11-29 12:54:31.088275] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:59.816 00:07:59.816 real 0m0.069s 00:07:59.816 user 0m0.039s 00:07:59.816 sys 0m0.029s 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:07:59.816 ************************************ 00:07:59.816 END TEST dd_invalid_oflag 00:07:59.816 ************************************ 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:59.816 ************************************ 00:07:59.816 START TEST dd_invalid_iflag 00:07:59.816 
************************************ 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:59.816 [2024-11-29 12:54:31.215532] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:59.816 00:07:59.816 real 0m0.069s 00:07:59.816 user 0m0.037s 00:07:59.816 sys 0m0.032s 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:07:59.816 ************************************ 00:07:59.816 END TEST dd_invalid_iflag 00:07:59.816 ************************************ 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:59.816 ************************************ 00:07:59.816 START TEST dd_unknown_flag 00:07:59.816 ************************************ 00:07:59.816 
12:54:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:59.816 12:54:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:00.076 [2024-11-29 12:54:31.342479] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:08:00.076 [2024-11-29 12:54:31.342589] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61913 ] 00:08:00.076 [2024-11-29 12:54:31.490034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.076 [2024-11-29 12:54:31.550615] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.335 [2024-11-29 12:54:31.624691] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:00.335 [2024-11-29 12:54:31.673909] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:00.335 [2024-11-29 12:54:31.674029] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:00.335 [2024-11-29 12:54:31.674092] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:00.335 [2024-11-29 12:54:31.674106] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:00.335 [2024-11-29 12:54:31.674400] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:08:00.335 [2024-11-29 12:54:31.674415] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:00.335 [2024-11-29 12:54:31.674480] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:00.335 [2024-11-29 12:54:31.674490] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:00.335 [2024-11-29 12:54:31.845291] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:00.594 12:54:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:08:00.594 12:54:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:00.594 12:54:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:08:00.594 12:54:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:08:00.594 12:54:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:08:00.594 12:54:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:00.594 00:08:00.594 real 0m0.652s 00:08:00.594 user 0m0.371s 00:08:00.594 sys 0m0.184s 00:08:00.594 12:54:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.594 12:54:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:08:00.594 ************************************ 00:08:00.594 END TEST dd_unknown_flag 00:08:00.594 ************************************ 00:08:00.594 12:54:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:08:00.594 12:54:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.594 12:54:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.594 12:54:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:00.594 ************************************ 00:08:00.594 START TEST dd_invalid_json 00:08:00.594 ************************************ 00:08:00.594 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:08:00.595 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:00.595 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:08:00.595 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:08:00.595 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:00.595 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:00.595 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:00.595 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:00.595 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:00.595 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:00.595 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:00.595 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:00.595 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:00.595 12:54:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:00.595 [2024-11-29 12:54:32.062310] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:08:00.595 [2024-11-29 12:54:32.062430] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61936 ] 00:08:00.854 [2024-11-29 12:54:32.204811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.854 [2024-11-29 12:54:32.261051] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.854 [2024-11-29 12:54:32.261162] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:08:00.854 [2024-11-29 12:54:32.261180] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:00.854 [2024-11-29 12:54:32.261190] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:00.854 [2024-11-29 12:54:32.261228] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:00.854 12:54:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:08:00.854 12:54:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:00.854 12:54:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:08:00.854 12:54:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:08:00.854 12:54:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:08:00.854 12:54:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:00.854 00:08:00.854 real 0m0.330s 00:08:00.854 user 0m0.153s 00:08:00.854 sys 0m0.074s 00:08:00.854 12:54:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.854 12:54:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:08:00.854 ************************************ 00:08:00.854 END TEST dd_invalid_json 00:08:00.854 ************************************ 00:08:00.854 12:54:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:08:00.854 12:54:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.854 12:54:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.854 12:54:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:01.113 ************************************ 00:08:01.113 START TEST dd_invalid_seek 00:08:01.113 ************************************ 00:08:01.113 12:54:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:08:01.113 12:54:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:01.113 12:54:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:01.113 12:54:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:08:01.113 12:54:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:01.113 12:54:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:01.113 
12:54:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:08:01.113 12:54:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:01.113 12:54:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:08:01.113 12:54:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:08:01.113 12:54:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:01.113 12:54:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:08:01.113 12:54:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.113 12:54:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:01.113 12:54:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.113 12:54:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.113 12:54:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.113 12:54:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.113 12:54:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.113 12:54:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.113 12:54:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:01.113 12:54:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:01.113 { 00:08:01.113 "subsystems": [ 00:08:01.113 { 00:08:01.113 "subsystem": "bdev", 00:08:01.113 "config": [ 00:08:01.113 { 00:08:01.113 "params": { 00:08:01.113 "block_size": 512, 00:08:01.113 "num_blocks": 512, 00:08:01.113 "name": "malloc0" 00:08:01.113 }, 00:08:01.113 "method": "bdev_malloc_create" 00:08:01.113 }, 00:08:01.113 { 00:08:01.113 "params": { 00:08:01.113 "block_size": 512, 00:08:01.113 "num_blocks": 512, 00:08:01.113 "name": "malloc1" 00:08:01.113 }, 00:08:01.113 "method": "bdev_malloc_create" 00:08:01.113 }, 00:08:01.113 { 00:08:01.113 "method": "bdev_wait_for_examine" 00:08:01.113 } 00:08:01.113 ] 00:08:01.113 } 00:08:01.113 ] 00:08:01.113 } 00:08:01.113 [2024-11-29 12:54:32.446562] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:08:01.113 [2024-11-29 12:54:32.446671] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61971 ] 00:08:01.113 [2024-11-29 12:54:32.597610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.371 [2024-11-29 12:54:32.661470] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.371 [2024-11-29 12:54:32.746234] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.371 [2024-11-29 12:54:32.829676] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:08:01.371 [2024-11-29 12:54:32.829776] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:01.631 [2024-11-29 12:54:33.025297] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:01.631 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:08:01.631 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:01.631 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:08:01.631 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:08:01.631 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:08:01.631 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:01.631 00:08:01.631 real 0m0.753s 00:08:01.631 user 0m0.484s 00:08:01.631 sys 0m0.226s 00:08:01.631 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.631 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:01.631 ************************************ 00:08:01.631 END TEST dd_invalid_seek 00:08:01.631 ************************************ 00:08:01.890 12:54:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:08:01.890 12:54:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:01.890 12:54:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.890 12:54:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:01.890 ************************************ 00:08:01.890 START TEST dd_invalid_skip 00:08:01.890 ************************************ 00:08:01.890 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:08:01.890 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:01.890 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:01.891 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:08:01.891 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:01.891 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:08:01.891 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:08:01.891 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:01.891 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:08:01.891 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:08:01.891 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:08:01.891 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:01.891 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:01.891 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.891 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.891 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.891 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.891 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.891 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.891 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.891 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:01.891 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:01.891 { 00:08:01.891 "subsystems": [ 00:08:01.891 { 00:08:01.891 "subsystem": "bdev", 00:08:01.891 "config": [ 00:08:01.891 { 00:08:01.891 "params": { 00:08:01.891 "block_size": 512, 00:08:01.891 "num_blocks": 512, 00:08:01.891 "name": "malloc0" 00:08:01.891 }, 00:08:01.891 "method": "bdev_malloc_create" 00:08:01.891 }, 00:08:01.891 { 00:08:01.891 "params": { 00:08:01.891 "block_size": 512, 00:08:01.891 "num_blocks": 512, 00:08:01.891 "name": "malloc1" 00:08:01.891 }, 00:08:01.891 "method": "bdev_malloc_create" 00:08:01.891 }, 00:08:01.891 { 00:08:01.891 "method": "bdev_wait_for_examine" 00:08:01.891 } 00:08:01.891 ] 00:08:01.891 } 00:08:01.891 ] 00:08:01.891 } 00:08:01.891 [2024-11-29 12:54:33.269924] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:08:01.891 [2024-11-29 12:54:33.270260] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62003 ] 00:08:02.150 [2024-11-29 12:54:33.423268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.150 [2024-11-29 12:54:33.497390] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.150 [2024-11-29 12:54:33.575391] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.150 [2024-11-29 12:54:33.651064] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:08:02.150 [2024-11-29 12:54:33.651146] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:02.410 [2024-11-29 12:54:33.822812] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:02.410 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:08:02.410 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:02.410 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:08:02.410 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:08:02.410 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:08:02.410 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:02.410 00:08:02.410 real 0m0.713s 00:08:02.410 user 0m0.456s 00:08:02.410 sys 0m0.211s 00:08:02.410 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.410 ************************************ 00:08:02.410 END TEST dd_invalid_skip 00:08:02.410 ************************************ 00:08:02.410 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:02.669 12:54:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:08:02.669 12:54:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:02.670 12:54:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.670 12:54:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:02.670 ************************************ 00:08:02.670 START TEST dd_invalid_input_count 00:08:02.670 ************************************ 00:08:02.670 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:08:02.670 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:02.670 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:02.670 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:08:02.670 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:02.670 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # 
method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:02.670 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:08:02.670 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:02.670 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:08:02.670 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:08:02.670 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:08:02.670 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:02.670 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:02.670 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.670 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:02.670 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.670 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:02.670 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.670 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:02.670 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.670 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:02.670 12:54:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:02.670 { 00:08:02.670 "subsystems": [ 00:08:02.670 { 00:08:02.670 "subsystem": "bdev", 00:08:02.670 "config": [ 00:08:02.670 { 00:08:02.670 "params": { 00:08:02.670 "block_size": 512, 00:08:02.670 "num_blocks": 512, 00:08:02.670 "name": "malloc0" 00:08:02.670 }, 00:08:02.670 "method": "bdev_malloc_create" 00:08:02.670 }, 00:08:02.670 { 00:08:02.670 "params": { 00:08:02.670 "block_size": 512, 00:08:02.670 "num_blocks": 512, 00:08:02.670 "name": "malloc1" 00:08:02.670 }, 00:08:02.670 "method": "bdev_malloc_create" 00:08:02.670 }, 00:08:02.670 { 00:08:02.670 "method": "bdev_wait_for_examine" 00:08:02.670 } 00:08:02.670 ] 00:08:02.670 } 00:08:02.670 ] 00:08:02.670 } 00:08:02.670 [2024-11-29 12:54:34.017106] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:08:02.670 [2024-11-29 12:54:34.017250] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62038 ] 00:08:02.670 [2024-11-29 12:54:34.160904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.929 [2024-11-29 12:54:34.222115] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.929 [2024-11-29 12:54:34.306629] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.929 [2024-11-29 12:54:34.385214] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:08:02.929 [2024-11-29 12:54:34.385344] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:03.189 [2024-11-29 12:54:34.555680] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:03.189 12:54:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:08:03.189 12:54:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:03.189 12:54:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:08:03.189 12:54:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:08:03.189 12:54:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:08:03.189 12:54:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:03.189 00:08:03.189 real 0m0.675s 00:08:03.189 user 0m0.420s 00:08:03.189 sys 0m0.211s 00:08:03.189 12:54:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.189 ************************************ 00:08:03.189 END TEST dd_invalid_input_count 00:08:03.189 ************************************ 00:08:03.189 12:54:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:03.189 12:54:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:08:03.189 12:54:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:03.189 12:54:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.189 12:54:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:03.189 ************************************ 00:08:03.189 START TEST dd_invalid_output_count 00:08:03.189 ************************************ 00:08:03.189 12:54:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # invalid_output_count 00:08:03.189 12:54:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:03.189 12:54:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:03.189 12:54:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:08:03.189 12:54:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:03.189 12:54:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:08:03.189 12:54:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:08:03.189 12:54:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:03.189 12:54:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:08:03.189 12:54:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.189 12:54:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:03.189 12:54:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.189 12:54:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.189 12:54:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.189 12:54:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.189 12:54:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.189 12:54:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.189 12:54:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:03.189 12:54:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:03.448 { 00:08:03.448 "subsystems": [ 00:08:03.448 { 00:08:03.448 "subsystem": "bdev", 00:08:03.448 "config": [ 00:08:03.448 { 00:08:03.448 "params": { 00:08:03.448 "block_size": 512, 00:08:03.448 "num_blocks": 512, 00:08:03.448 "name": "malloc0" 00:08:03.448 }, 00:08:03.448 "method": "bdev_malloc_create" 00:08:03.448 }, 00:08:03.448 { 00:08:03.448 "method": "bdev_wait_for_examine" 00:08:03.448 } 00:08:03.448 ] 00:08:03.448 } 00:08:03.448 ] 00:08:03.448 } 00:08:03.448 [2024-11-29 12:54:34.753709] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:08:03.448 [2024-11-29 12:54:34.753810] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62077 ] 00:08:03.448 [2024-11-29 12:54:34.905121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.712 [2024-11-29 12:54:34.980046] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.712 [2024-11-29 12:54:35.043264] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.712 [2024-11-29 12:54:35.109712] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:08:03.712 [2024-11-29 12:54:35.109787] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:03.975 [2024-11-29 12:54:35.249511] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:03.975 12:54:35 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:08:03.975 12:54:35 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:03.975 12:54:35 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:08:03.975 12:54:35 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:08:03.975 12:54:35 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:08:03.975 12:54:35 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:03.975 00:08:03.975 real 0m0.638s 00:08:03.975 user 0m0.402s 00:08:03.975 sys 0m0.187s 00:08:03.975 12:54:35 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.975 12:54:35 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:03.975 ************************************ 00:08:03.975 END TEST dd_invalid_output_count 00:08:03.975 ************************************ 00:08:03.975 12:54:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:08:03.975 12:54:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:03.975 12:54:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.975 12:54:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:03.975 ************************************ 00:08:03.975 START TEST dd_bs_not_multiple 00:08:03.975 ************************************ 00:08:03.975 12:54:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:08:03.975 12:54:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:03.975 12:54:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:03.975 12:54:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:08:03.975 12:54:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:03.975 12:54:35 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:03.975 12:54:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:08:03.975 12:54:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:03.975 12:54:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:08:03.975 12:54:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:03.975 12:54:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:08:03.975 12:54:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.975 12:54:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:08:03.975 12:54:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:03.975 12:54:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.976 12:54:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.976 12:54:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.976 12:54:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.976 12:54:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.976 12:54:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.976 12:54:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:03.976 12:54:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:03.976 { 00:08:03.976 "subsystems": [ 00:08:03.976 { 00:08:03.976 "subsystem": "bdev", 00:08:03.976 "config": [ 00:08:03.976 { 00:08:03.976 "params": { 00:08:03.976 "block_size": 512, 00:08:03.976 "num_blocks": 512, 00:08:03.976 "name": "malloc0" 00:08:03.976 }, 00:08:03.976 "method": "bdev_malloc_create" 00:08:03.976 }, 00:08:03.976 { 00:08:03.976 "params": { 00:08:03.976 "block_size": 512, 00:08:03.976 "num_blocks": 512, 00:08:03.976 "name": "malloc1" 00:08:03.976 }, 00:08:03.976 "method": "bdev_malloc_create" 00:08:03.976 }, 00:08:03.976 { 00:08:03.976 "method": "bdev_wait_for_examine" 00:08:03.976 } 00:08:03.976 ] 00:08:03.976 } 00:08:03.976 ] 00:08:03.976 } 00:08:03.976 [2024-11-29 12:54:35.443184] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:08:03.976 [2024-11-29 12:54:35.443293] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62114 ] 00:08:04.234 [2024-11-29 12:54:35.589880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.234 [2024-11-29 12:54:35.651082] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.234 [2024-11-29 12:54:35.714126] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.493 [2024-11-29 12:54:35.785351] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:08:04.493 [2024-11-29 12:54:35.785411] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:04.493 [2024-11-29 12:54:35.922746] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:04.493 12:54:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:08:04.493 12:54:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:04.493 12:54:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:08:04.493 12:54:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:08:04.493 12:54:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:08:04.493 12:54:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:04.493 00:08:04.493 real 0m0.623s 00:08:04.493 user 0m0.399s 00:08:04.493 sys 0m0.179s 00:08:04.494 12:54:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.494 12:54:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:04.494 ************************************ 00:08:04.494 END TEST dd_bs_not_multiple 00:08:04.494 ************************************ 00:08:04.753 ************************************ 00:08:04.753 END TEST spdk_dd_negative 00:08:04.753 ************************************ 00:08:04.753 00:08:04.753 real 0m7.618s 00:08:04.753 user 0m4.051s 00:08:04.753 sys 0m2.932s 00:08:04.753 12:54:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.753 12:54:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:04.753 ************************************ 00:08:04.753 END TEST spdk_dd 00:08:04.753 ************************************ 00:08:04.753 00:08:04.753 real 1m23.524s 00:08:04.753 user 0m52.895s 00:08:04.753 sys 0m38.640s 00:08:04.753 12:54:36 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.753 12:54:36 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:04.753 12:54:36 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:04.753 12:54:36 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:04.753 12:54:36 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:04.753 12:54:36 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:04.753 12:54:36 -- common/autotest_common.sh@10 -- # set +x 00:08:04.753 12:54:36 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:04.753 12:54:36 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:08:04.753 12:54:36 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:08:04.753 12:54:36 -- spdk/autotest.sh@277 -- 
# export NET_TYPE 00:08:04.753 12:54:36 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:08:04.753 12:54:36 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:08:04.753 12:54:36 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:04.753 12:54:36 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:04.753 12:54:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.753 12:54:36 -- common/autotest_common.sh@10 -- # set +x 00:08:04.753 ************************************ 00:08:04.753 START TEST nvmf_tcp 00:08:04.753 ************************************ 00:08:04.753 12:54:36 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:04.753 * Looking for test storage... 00:08:05.013 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:05.013 12:54:36 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:05.013 12:54:36 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:05.013 12:54:36 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:08:05.013 12:54:36 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:05.013 12:54:36 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:05.013 12:54:36 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:05.013 12:54:36 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:05.013 12:54:36 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:05.013 12:54:36 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:05.013 12:54:36 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:05.013 12:54:36 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:05.013 12:54:36 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:05.013 12:54:36 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:05.013 12:54:36 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:05.013 12:54:36 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:05.013 12:54:36 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:05.013 12:54:36 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:08:05.013 12:54:36 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:05.013 12:54:36 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:05.013 12:54:36 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:05.013 12:54:36 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:08:05.013 12:54:36 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:05.013 12:54:36 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:08:05.013 12:54:36 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:05.013 12:54:36 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:05.013 12:54:36 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:08:05.013 12:54:36 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:05.013 12:54:36 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:08:05.013 12:54:36 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:05.013 12:54:36 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:05.013 12:54:36 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:05.013 12:54:36 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:08:05.013 12:54:36 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:05.013 12:54:36 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:05.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.013 --rc genhtml_branch_coverage=1 00:08:05.013 --rc genhtml_function_coverage=1 00:08:05.013 --rc genhtml_legend=1 00:08:05.013 --rc geninfo_all_blocks=1 00:08:05.013 --rc geninfo_unexecuted_blocks=1 00:08:05.013 00:08:05.013 ' 00:08:05.013 12:54:36 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:05.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.013 --rc genhtml_branch_coverage=1 00:08:05.013 --rc genhtml_function_coverage=1 00:08:05.013 --rc genhtml_legend=1 00:08:05.013 --rc geninfo_all_blocks=1 00:08:05.013 --rc geninfo_unexecuted_blocks=1 00:08:05.013 00:08:05.013 ' 00:08:05.013 12:54:36 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:05.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.013 --rc genhtml_branch_coverage=1 00:08:05.013 --rc genhtml_function_coverage=1 00:08:05.013 --rc genhtml_legend=1 00:08:05.013 --rc geninfo_all_blocks=1 00:08:05.013 --rc geninfo_unexecuted_blocks=1 00:08:05.013 00:08:05.013 ' 00:08:05.013 12:54:36 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:05.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.013 --rc genhtml_branch_coverage=1 00:08:05.013 --rc genhtml_function_coverage=1 00:08:05.013 --rc genhtml_legend=1 00:08:05.013 --rc geninfo_all_blocks=1 00:08:05.013 --rc geninfo_unexecuted_blocks=1 00:08:05.013 00:08:05.013 ' 00:08:05.013 12:54:36 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:05.013 12:54:36 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:05.013 12:54:36 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:05.013 12:54:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:05.013 12:54:36 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.013 12:54:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:05.013 ************************************ 00:08:05.013 START TEST nvmf_target_core 00:08:05.013 ************************************ 00:08:05.013 12:54:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:05.013 * Looking for test storage... 00:08:05.013 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:05.013 12:54:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:05.013 12:54:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:08:05.013 12:54:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:05.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.276 --rc genhtml_branch_coverage=1 00:08:05.276 --rc genhtml_function_coverage=1 00:08:05.276 --rc genhtml_legend=1 00:08:05.276 --rc geninfo_all_blocks=1 00:08:05.276 --rc geninfo_unexecuted_blocks=1 00:08:05.276 00:08:05.276 ' 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:05.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.276 --rc genhtml_branch_coverage=1 00:08:05.276 --rc genhtml_function_coverage=1 00:08:05.276 --rc genhtml_legend=1 00:08:05.276 --rc geninfo_all_blocks=1 00:08:05.276 --rc geninfo_unexecuted_blocks=1 00:08:05.276 00:08:05.276 ' 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:05.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.276 --rc genhtml_branch_coverage=1 00:08:05.276 --rc genhtml_function_coverage=1 00:08:05.276 --rc genhtml_legend=1 00:08:05.276 --rc geninfo_all_blocks=1 00:08:05.276 --rc geninfo_unexecuted_blocks=1 00:08:05.276 00:08:05.276 ' 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:05.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.276 --rc genhtml_branch_coverage=1 00:08:05.276 --rc genhtml_function_coverage=1 00:08:05.276 --rc genhtml_legend=1 00:08:05.276 --rc geninfo_all_blocks=1 00:08:05.276 --rc geninfo_unexecuted_blocks=1 00:08:05.276 00:08:05.276 ' 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:05.276 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:05.276 ************************************ 00:08:05.276 START TEST nvmf_host_management 00:08:05.276 ************************************ 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:05.276 * Looking for test storage... 
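The "[: : integer expression expected" message above comes from test/nvmf/common.sh line 33, where a flag that happens to be empty is compared with the numeric -eq operator; an empty string is not an integer, so [ prints the complaint and the test simply evaluates false, which is why the run continues unharmed. A minimal sketch of the failure and of the usual guard (FLAG is a stand-in name, not the actual variable used by common.sh):
# Reproduces the "[: : integer expression expected" complaint seen above.
FLAG=""                          # flag left unset/empty by the environment
if [ "$FLAG" -eq 1 ]; then       # -eq requires integers on both sides, so this errors and is false
    echo "flag enabled"
fi
# Typical guard: default the expansion to 0 so the comparison stays numeric.
if [ "${FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi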
00:08:05.276 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:08:05.276 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:05.537 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:05.537 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:05.537 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:05.537 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:05.537 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:05.537 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:05.537 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:05.537 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:05.537 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:05.537 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:05.537 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:05.537 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:05.537 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:05.537 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:05.537 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:05.537 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:05.537 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:05.537 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:05.537 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:05.537 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:05.537 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:05.537 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:05.537 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:05.537 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:05.537 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:05.537 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:05.537 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:05.537 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:05.537 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:05.537 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:05.537 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:05.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.537 --rc genhtml_branch_coverage=1 00:08:05.537 --rc genhtml_function_coverage=1 00:08:05.537 --rc genhtml_legend=1 00:08:05.537 --rc geninfo_all_blocks=1 00:08:05.537 --rc geninfo_unexecuted_blocks=1 00:08:05.537 00:08:05.537 ' 00:08:05.537 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:05.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.537 --rc genhtml_branch_coverage=1 00:08:05.537 --rc genhtml_function_coverage=1 00:08:05.537 --rc genhtml_legend=1 00:08:05.537 --rc geninfo_all_blocks=1 00:08:05.537 --rc geninfo_unexecuted_blocks=1 00:08:05.537 00:08:05.537 ' 00:08:05.537 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:05.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.537 --rc genhtml_branch_coverage=1 00:08:05.537 --rc genhtml_function_coverage=1 00:08:05.537 --rc genhtml_legend=1 00:08:05.537 --rc geninfo_all_blocks=1 00:08:05.537 --rc geninfo_unexecuted_blocks=1 00:08:05.537 00:08:05.537 ' 00:08:05.537 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:05.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.537 --rc genhtml_branch_coverage=1 00:08:05.537 --rc genhtml_function_coverage=1 00:08:05.537 --rc genhtml_legend=1 00:08:05.537 --rc geninfo_all_blocks=1 00:08:05.537 --rc geninfo_unexecuted_blocks=1 00:08:05.537 00:08:05.537 ' 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
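The block that just ran is scripts/common.sh deciding whether the installed lcov predates 2.0: "lt 1.15 2" calls cmp_versions, which splits both version strings on ".", "-" and ":" and compares them field by field as integers, and the result selects the --rc lcov_branch_coverage / lcov_function_coverage options exported above. A condensed sketch of that comparison, assuming purely numeric fields (the real helper also handles ">", "=" and non-numeric components through its decimal helper):
# Simplified field-by-field version comparison in the spirit of cmp_versions.
ver_lt() {
    local IFS=.-: v
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++)); do
        local x=${a[v]:-0} y=${b[v]:-0}
        ((x < y)) && return 0    # earlier field already smaller: strictly older
        ((x > y)) && return 1    # earlier field larger: not older
    done
    return 1                     # all fields equal: not strictly less
}
ver_lt 1.15 2 && echo "lcov older than 2: keep the branch/function coverage --rc flags"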
00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:05.538 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:05.538 12:54:36 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:05.538 Cannot find device "nvmf_init_br" 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:05.538 Cannot find device "nvmf_init_br2" 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:05.538 Cannot find device "nvmf_tgt_br" 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:05.538 Cannot find device "nvmf_tgt_br2" 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:05.538 Cannot find device "nvmf_init_br" 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:05.538 Cannot find device "nvmf_init_br2" 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:05.538 Cannot find device "nvmf_tgt_br" 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:08:05.538 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:05.538 Cannot find device "nvmf_tgt_br2" 00:08:05.539 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:08:05.539 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:05.539 Cannot find device "nvmf_br" 00:08:05.539 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:08:05.539 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:05.539 Cannot find device "nvmf_init_if" 00:08:05.539 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:08:05.539 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:05.539 Cannot find device "nvmf_init_if2" 00:08:05.539 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:08:05.539 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:05.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:05.539 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:08:05.539 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:05.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:05.539 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:08:05.539 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:05.539 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:05.539 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:05.539 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:05.539 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:05.798 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:05.798 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:05.798 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:05.798 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:05.798 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:05.798 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:05.798 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:05.798 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:05.798 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:05.798 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:05.798 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:05.798 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:05.798 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:05.798 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:05.798 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:05.798 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:08:05.798 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:05.798 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:05.798 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:05.798 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:05.798 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:05.798 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:05.798 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:05.798 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:05.798 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:06.057 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:06.057 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:06.057 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:06.057 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:06.057 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:08:06.057 00:08:06.057 --- 10.0.0.3 ping statistics --- 00:08:06.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.057 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:08:06.057 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:06.057 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:06.057 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:08:06.057 00:08:06.057 --- 10.0.0.4 ping statistics --- 00:08:06.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.057 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:08:06.057 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:06.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:06.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:08:06.057 00:08:06.057 --- 10.0.0.1 ping statistics --- 00:08:06.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.058 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:08:06.058 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:06.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:06.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:08:06.058 00:08:06.058 --- 10.0.0.2 ping statistics --- 00:08:06.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.058 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:08:06.058 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:06.058 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:08:06.058 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:06.058 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:06.058 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:06.058 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:06.058 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:06.058 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:06.058 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:06.058 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:06.058 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:06.058 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:06.058 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:06.058 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:06.058 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:06.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.058 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62453 00:08:06.058 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:06.058 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62453 00:08:06.058 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62453 ']' 00:08:06.058 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.058 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.058 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.058 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.058 12:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:06.058 [2024-11-29 12:54:37.437912] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
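For NET_TYPE=virt the nvmf_veth_init calls above build an entirely local topology: veth pairs for the initiator side (10.0.0.1 and 10.0.0.2 stay in the root namespace) and for the target side (10.0.0.3 and 10.0.0.4 move into nvmf_tgt_ns_spdk), their peer ends joined by the nvmf_br bridge, with iptables ACCEPT rules tagged by an SPDK_NVMF comment so nvmftestfini can later find and remove exactly those rules; the four pings are the connectivity check. Condensed to one pair per side, the setup amounts to roughly:
# One initiator pair and one target pair from the sequence above (the real init adds a second of each).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator pair, both ends in the root ns
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end moves into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                        # bridge the root-ns peer ends together
ip link set nvmf_tgt_br master nvmf_br
# The rule text is repeated in a comment so cleanup can delete exactly what was inserted.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.3                                             # root namespace reaches the target address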
00:08:06.058 [2024-11-29 12:54:37.438189] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.317 [2024-11-29 12:54:37.594703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:06.317 [2024-11-29 12:54:37.674395] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.317 [2024-11-29 12:54:37.674716] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:06.317 [2024-11-29 12:54:37.674908] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.317 [2024-11-29 12:54:37.675095] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.317 [2024-11-29 12:54:37.675111] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:06.317 [2024-11-29 12:54:37.676458] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.317 [2024-11-29 12:54:37.676618] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:06.317 [2024-11-29 12:54:37.676748] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:08:06.317 [2024-11-29 12:54:37.676752] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.317 [2024-11-29 12:54:37.738461] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:07.256 [2024-11-29 12:54:38.549302] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
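host_management.sh@22-23 removes any stale rpcs.txt and then cats a freshly written batch of RPCs into rpc_cmd (the in-tree wrapper around scripts/rpc.py, talking to the target on /var/tmp/spdk.sock). The file's contents are not echoed in this log, but judging from the Malloc0 bdev, the nqn.2016-06.io.spdk:cnode0 / host0 pair and the 10.0.0.3:4420 listener reported below, it plausibly corresponds to something like the following reconstruction (the method names are real scripts/rpc.py methods; the exact flags are assumptions, not the literal file):
# Reconstruction of the target-side setup implied by the log, not the literal rpcs.txt.
rpc_cmd bdev_malloc_create 64 512 -b Malloc0                                    # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE above
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420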
00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:07.256 Malloc0 00:08:07.256 [2024-11-29 12:54:38.634702] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:07.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62512 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62512 /var/tmp/bdevperf.sock 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62512 ']' 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
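Two separate RPC endpoints are in play from this point on: nvmf_tgt keeps answering on the default /var/tmp/spdk.sock, while bdevperf is started with -r /var/tmp/bdevperf.sock, so the test can drive each process independently, for example:
rpc_cmd nvmf_get_subsystems                                    # default socket, talks to nvmf_tgt
rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1   # bdevperf's socket, as the waitforio loop below does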
00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:07.256 { 00:08:07.256 "params": { 00:08:07.256 "name": "Nvme$subsystem", 00:08:07.256 "trtype": "$TEST_TRANSPORT", 00:08:07.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:07.256 "adrfam": "ipv4", 00:08:07.256 "trsvcid": "$NVMF_PORT", 00:08:07.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:07.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:07.256 "hdgst": ${hdgst:-false}, 00:08:07.256 "ddgst": ${ddgst:-false} 00:08:07.256 }, 00:08:07.256 "method": "bdev_nvme_attach_controller" 00:08:07.256 } 00:08:07.256 EOF 00:08:07.256 )") 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:07.256 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:07.256 "params": { 00:08:07.256 "name": "Nvme0", 00:08:07.256 "trtype": "tcp", 00:08:07.256 "traddr": "10.0.0.3", 00:08:07.256 "adrfam": "ipv4", 00:08:07.256 "trsvcid": "4420", 00:08:07.256 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:07.256 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:07.256 "hdgst": false, 00:08:07.256 "ddgst": false 00:08:07.256 }, 00:08:07.256 "method": "bdev_nvme_attach_controller" 00:08:07.256 }' 00:08:07.256 [2024-11-29 12:54:38.749417] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:08:07.256 [2024-11-29 12:54:38.749521] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62512 ] 00:08:07.515 [2024-11-29 12:54:38.902585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.515 [2024-11-29 12:54:38.978784] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.774 [2024-11-29 12:54:39.050342] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.774 Running I/O for 10 seconds... 
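gen_nvmf_target_json prints the bdev_nvme_attach_controller parameters shown above, and bdevperf reads them through --json /dev/fd/63 (a process substitution, so nothing is written to disk). Wrapped in SPDK's subsystem-config envelope, the document it consumes looks roughly like this reconstruction (the envelope is assumed from the printed params, not captured verbatim in the log):
# Roughly what bdevperf reads on /dev/fd/63; the attach creates bdev Nvme0n1,
# which the -q 64 -o 65536 -w verify -t 10 run then exercises.
cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON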
00:08:07.774 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.774 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:07.774 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:07.774 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.774 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:07.774 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.774 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:07.774 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:07.774 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:07.774 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:07.774 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:07.774 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:07.774 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:07.774 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:07.774 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:07.774 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:07.774 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.774 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:07.774 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.033 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:08.033 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:08.033 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:08.305 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:08.305 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:08.305 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:08.305 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.305 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.305 12:54:39 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:08.305 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.305 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:08:08.305 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:08:08.305 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:08.305 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:08.305 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:08.305 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:08.305 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.305 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.305 [2024-11-29 12:54:39.634148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.305 [2024-11-29 12:54:39.634219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.305 [2024-11-29 12:54:39.634248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.305 [2024-11-29 12:54:39.634261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.305 [2024-11-29 12:54:39.634273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.305 [2024-11-29 12:54:39.634283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.305 [2024-11-29 12:54:39.634296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.305 [2024-11-29 12:54:39.634306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.305 [2024-11-29 12:54:39.634319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.305 [2024-11-29 12:54:39.634329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.305 [2024-11-29 12:54:39.634341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.305 [2024-11-29 12:54:39.634351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.634363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:08:08.306 [2024-11-29 12:54:39.634373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.634432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.634458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.634486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.634496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.634512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.634522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.634550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.634576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.634596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.634610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.634622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.634632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.634659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.634669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.634682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.634691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.634704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.634714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.634726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 
[2024-11-29 12:54:39.634736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.634748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.634758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.634771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.634781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.634793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.634803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.634815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.634833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.634845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.634855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.634867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.634896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.634911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.634922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.634933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.634943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.634956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.634966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.634979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 
12:54:39.634989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.635011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.635033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.635054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.635075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.635095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.635116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.635137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.635157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.635179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 
12:54:39.635204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.635225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.635246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.635267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.635288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.635308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.635335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.635356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.635377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.635399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 
12:54:39.635420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.635441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.635462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.635482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.635516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.635539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.635565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.635586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.635608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.635629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 
12:54:39.635652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.635674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.635703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.635724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.635745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.635766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.635786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.306 [2024-11-29 12:54:39.635807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.306 [2024-11-29 12:54:39.635818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e462d0 is same with the state(6) to be set 00:08:08.306 [2024-11-29 12:54:39.637060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:08.306 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.306 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:08.306 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.306 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.306 task offset: 73728 on job bdev=Nvme0n1 fails 
00:08:08.306 00:08:08.306 Latency(us) 00:08:08.306 [2024-11-29T12:54:39.821Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.306 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:08.306 Job: Nvme0n1 ended in about 0.45 seconds with error 00:08:08.306 Verification LBA range: start 0x0 length 0x400 00:08:08.306 Nvme0n1 : 0.45 1266.59 79.16 140.73 0.00 43898.30 4617.31 44802.79 00:08:08.306 [2024-11-29T12:54:39.821Z] =================================================================================================================== 00:08:08.306 [2024-11-29T12:54:39.821Z] Total : 1266.59 79.16 140.73 0.00 43898.30 4617.31 44802.79 00:08:08.306 [2024-11-29 12:54:39.639598] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:08.306 [2024-11-29 12:54:39.639625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4bce0 (9): Bad file descriptor 00:08:08.306 [2024-11-29 12:54:39.647762] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:08:08.306 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.306 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:09.247 12:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62512 00:08:09.247 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62512) - No such process 00:08:09.247 12:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:09.247 12:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:09.247 12:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:09.247 12:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:09.247 12:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:09.247 12:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:09.247 12:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:09.247 12:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:09.247 { 00:08:09.247 "params": { 00:08:09.247 "name": "Nvme$subsystem", 00:08:09.247 "trtype": "$TEST_TRANSPORT", 00:08:09.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:09.247 "adrfam": "ipv4", 00:08:09.247 "trsvcid": "$NVMF_PORT", 00:08:09.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:09.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:09.247 "hdgst": ${hdgst:-false}, 00:08:09.247 "ddgst": ${ddgst:-false} 00:08:09.247 }, 00:08:09.247 "method": "bdev_nvme_attach_controller" 00:08:09.247 } 00:08:09.247 EOF 00:08:09.247 )") 00:08:09.247 12:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:09.247 12:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:08:09.247 12:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:09.247 12:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:09.247 "params": { 00:08:09.247 "name": "Nvme0", 00:08:09.247 "trtype": "tcp", 00:08:09.247 "traddr": "10.0.0.3", 00:08:09.247 "adrfam": "ipv4", 00:08:09.247 "trsvcid": "4420", 00:08:09.247 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:09.247 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:09.247 "hdgst": false, 00:08:09.247 "ddgst": false 00:08:09.247 }, 00:08:09.247 "method": "bdev_nvme_attach_controller" 00:08:09.247 }' 00:08:09.247 [2024-11-29 12:54:40.734330] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:08:09.247 [2024-11-29 12:54:40.734718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62551 ] 00:08:09.505 [2024-11-29 12:54:40.893003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.505 [2024-11-29 12:54:40.957334] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.763 [2024-11-29 12:54:41.025964] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.763 Running I/O for 1 seconds... 00:08:10.699 1344.00 IOPS, 84.00 MiB/s 00:08:10.699 Latency(us) 00:08:10.699 [2024-11-29T12:54:42.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:10.699 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:10.699 Verification LBA range: start 0x0 length 0x400 00:08:10.699 Nvme0n1 : 1.03 1362.78 85.17 0.00 0.00 46021.77 4796.04 45041.11 00:08:10.699 [2024-11-29T12:54:42.214Z] =================================================================================================================== 00:08:10.699 [2024-11-29T12:54:42.214Z] Total : 1362.78 85.17 0.00 0.00 46021.77 4796.04 45041.11 00:08:10.957 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:10.957 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:10.957 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:10.957 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:10.957 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:10.957 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:10.957 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:11.216 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:11.216 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:11.216 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:11.216 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:11.216 rmmod nvme_tcp 00:08:11.216 rmmod nvme_fabrics 
00:08:11.216 rmmod nvme_keyring 00:08:11.216 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:11.216 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:11.216 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:11.216 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62453 ']' 00:08:11.216 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62453 00:08:11.216 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 62453 ']' 00:08:11.216 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 62453 00:08:11.216 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:11.216 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.217 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62453 00:08:11.217 killing process with pid 62453 00:08:11.217 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:11.217 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:11.217 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62453' 00:08:11.217 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 62453 00:08:11.217 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 62453 00:08:11.476 [2024-11-29 12:54:42.812070] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:11.476 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:11.476 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:11.476 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:11.476 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:11.476 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:11.476 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:11.476 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:11.476 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:11.476 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:11.476 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:11.476 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:11.476 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:11.476 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 
-- # ip link set nvmf_tgt_br2 nomaster 00:08:11.476 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:11.476 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:11.476 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:11.476 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:11.476 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:11.476 12:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:11.735 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:11.735 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:11.735 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:11.735 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:11.735 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.735 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:11.735 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.735 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:08:11.735 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:11.735 00:08:11.735 real 0m6.482s 00:08:11.735 user 0m23.367s 00:08:11.735 sys 0m1.726s 00:08:11.735 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.735 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.735 ************************************ 00:08:11.735 END TEST nvmf_host_management 00:08:11.735 ************************************ 00:08:11.735 12:54:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:11.735 12:54:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:11.735 12:54:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.735 12:54:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:11.735 ************************************ 00:08:11.735 START TEST nvmf_lvol 00:08:11.735 ************************************ 00:08:11.735 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:11.735 * Looking for test storage... 
00:08:11.995 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:11.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.995 --rc genhtml_branch_coverage=1 00:08:11.995 --rc genhtml_function_coverage=1 00:08:11.995 --rc genhtml_legend=1 00:08:11.995 --rc geninfo_all_blocks=1 00:08:11.995 --rc geninfo_unexecuted_blocks=1 00:08:11.995 00:08:11.995 ' 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:11.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.995 --rc genhtml_branch_coverage=1 00:08:11.995 --rc genhtml_function_coverage=1 00:08:11.995 --rc genhtml_legend=1 00:08:11.995 --rc geninfo_all_blocks=1 00:08:11.995 --rc geninfo_unexecuted_blocks=1 00:08:11.995 00:08:11.995 ' 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:11.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.995 --rc genhtml_branch_coverage=1 00:08:11.995 --rc genhtml_function_coverage=1 00:08:11.995 --rc genhtml_legend=1 00:08:11.995 --rc geninfo_all_blocks=1 00:08:11.995 --rc geninfo_unexecuted_blocks=1 00:08:11.995 00:08:11.995 ' 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:11.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.995 --rc genhtml_branch_coverage=1 00:08:11.995 --rc genhtml_function_coverage=1 00:08:11.995 --rc genhtml_legend=1 00:08:11.995 --rc geninfo_all_blocks=1 00:08:11.995 --rc geninfo_unexecuted_blocks=1 00:08:11.995 00:08:11.995 ' 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:11.995 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.996 12:54:43 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:11.996 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:11.996 
12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:11.996 Cannot find device "nvmf_init_br" 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:11.996 Cannot find device "nvmf_init_br2" 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:11.996 Cannot find device "nvmf_tgt_br" 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:11.996 Cannot find device "nvmf_tgt_br2" 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:11.996 Cannot find device "nvmf_init_br" 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:11.996 Cannot find device "nvmf_init_br2" 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:11.996 Cannot find device "nvmf_tgt_br" 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:11.996 Cannot find device "nvmf_tgt_br2" 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:08:11.996 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:12.256 Cannot find device "nvmf_br" 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:12.256 Cannot find device "nvmf_init_if" 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:12.256 Cannot find device "nvmf_init_if2" 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:12.256 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:12.256 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:12.256 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:12.256 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:08:12.256 00:08:12.256 --- 10.0.0.3 ping statistics --- 00:08:12.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.256 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:12.256 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:12.256 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:08:12.256 00:08:12.256 --- 10.0.0.4 ping statistics --- 00:08:12.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.256 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:12.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:12.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:08:12.256 00:08:12.256 --- 10.0.0.1 ping statistics --- 00:08:12.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.256 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:08:12.256 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:12.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:12.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:08:12.516 00:08:12.516 --- 10.0.0.2 ping statistics --- 00:08:12.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.516 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:08:12.516 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:12.516 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:08:12.516 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:12.516 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:12.516 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:12.516 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:12.516 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:12.516 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:12.516 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:12.516 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:12.516 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:12.516 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:12.516 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:12.516 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=62818 00:08:12.516 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:12.516 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 62818 00:08:12.516 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 62818 ']' 00:08:12.516 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.516 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.516 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.516 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.516 12:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:12.516 [2024-11-29 12:54:43.877739] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
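[annotation, not part of the captured output] The nvmf/common.sh trace above (@177-@225) builds the test network before nvmf_tgt starts: the target-side veth ends are moved into the nvmf_tgt_ns_spdk namespace, everything is joined by the nvmf_br bridge, port 4420 is opened with iptables, and reachability is verified with the four pings. A condensed sketch of that setup, reconstructed from the traced commands and showing only one initiator/target pair:

  # Reconstructed from the trace above; interface names and addresses are the ones logged.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end + bridge end
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target end + bridge end
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target side lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                      # bridge the two sides together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic
  ping -c 1 10.0.0.3                                           # initiator -> target sanity check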
00:08:12.516 [2024-11-29 12:54:43.877856] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.775 [2024-11-29 12:54:44.038037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:12.775 [2024-11-29 12:54:44.102139] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:12.775 [2024-11-29 12:54:44.102415] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:12.775 [2024-11-29 12:54:44.102644] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:12.775 [2024-11-29 12:54:44.102892] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:12.775 [2024-11-29 12:54:44.103065] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:12.775 [2024-11-29 12:54:44.104460] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.775 [2024-11-29 12:54:44.104932] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.775 [2024-11-29 12:54:44.104943] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.775 [2024-11-29 12:54:44.167688] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.712 12:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.712 12:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:13.712 12:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:13.712 12:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:13.712 12:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:13.712 12:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:13.712 12:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:13.973 [2024-11-29 12:54:45.283060] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:13.974 12:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:14.250 12:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:14.250 12:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:14.523 12:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:14.523 12:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:14.783 12:54:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:15.351 12:54:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=97848b81-bc89-4e9d-9264-c457366b8382 00:08:15.351 12:54:46 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 97848b81-bc89-4e9d-9264-c457366b8382 lvol 20 00:08:15.611 12:54:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=32897535-91fa-4a66-af47-12c8e35e060b 00:08:15.611 12:54:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:15.871 12:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 32897535-91fa-4a66-af47-12c8e35e060b 00:08:16.130 12:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:16.389 [2024-11-29 12:54:47.668143] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:16.389 12:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:16.648 12:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:16.648 12:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62899 00:08:16.648 12:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:17.585 12:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 32897535-91fa-4a66-af47-12c8e35e060b MY_SNAPSHOT 00:08:17.844 12:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=53f7c65b-dd70-423a-b6d4-4b725be759ca 00:08:17.844 12:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 32897535-91fa-4a66-af47-12c8e35e060b 30 00:08:18.413 12:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 53f7c65b-dd70-423a-b6d4-4b725be759ca MY_CLONE 00:08:18.413 12:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f12d5b41-fcb2-43a0-b28d-b11308a9b211 00:08:18.413 12:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate f12d5b41-fcb2-43a0-b28d-b11308a9b211 00:08:18.980 12:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62899 00:08:27.138 Initializing NVMe Controllers 00:08:27.138 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:08:27.138 Controller IO queue size 128, less than required. 00:08:27.138 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:27.138 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:27.138 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:27.138 Initialization complete. Launching workers. 
00:08:27.138 ======================================================== 00:08:27.138 Latency(us) 00:08:27.138 Device Information : IOPS MiB/s Average min max 00:08:27.138 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8869.58 34.65 14432.98 3420.11 63381.33 00:08:27.138 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8927.98 34.87 14337.49 1843.23 66798.30 00:08:27.138 ======================================================== 00:08:27.138 Total : 17797.55 69.52 14385.08 1843.23 66798.30 00:08:27.138 00:08:27.138 12:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:27.398 12:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 32897535-91fa-4a66-af47-12c8e35e060b 00:08:27.658 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 97848b81-bc89-4e9d-9264-c457366b8382 00:08:27.918 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:27.918 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:27.918 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:27.918 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:27.918 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:27.918 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:27.918 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:27.918 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:27.918 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:27.918 rmmod nvme_tcp 00:08:27.918 rmmod nvme_fabrics 00:08:27.918 rmmod nvme_keyring 00:08:27.918 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:27.918 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:27.918 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:27.918 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 62818 ']' 00:08:27.918 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 62818 00:08:27.918 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 62818 ']' 00:08:27.918 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 62818 00:08:28.177 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:28.177 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:28.177 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62818 00:08:28.177 killing process with pid 62818 00:08:28.177 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:28.177 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:28.177 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 62818' 00:08:28.177 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 62818 00:08:28.177 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 62818 00:08:28.437 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:28.437 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:28.437 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:28.437 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:28.437 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:28.437 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:28.437 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:28.437 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:28.437 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:28.437 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:28.437 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:28.437 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:28.437 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:28.437 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:28.437 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:28.437 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:28.437 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:28.437 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:28.437 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:28.437 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:28.437 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:28.437 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:28.696 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:28.696 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.696 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:28.696 12:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.696 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:08:28.696 00:08:28.696 real 0m16.841s 00:08:28.696 user 1m8.877s 00:08:28.696 sys 0m4.007s 00:08:28.696 ************************************ 00:08:28.696 END TEST nvmf_lvol 00:08:28.696 
************************************ 00:08:28.696 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.696 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:28.696 12:55:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:28.696 12:55:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:28.696 12:55:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.696 12:55:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:28.696 ************************************ 00:08:28.696 START TEST nvmf_lvs_grow 00:08:28.696 ************************************ 00:08:28.696 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:28.696 * Looking for test storage... 00:08:28.696 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:28.697 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:28.697 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:08:28.697 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:28.958 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:28.958 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:28.958 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:28.958 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:28.958 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:28.958 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:28.958 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:28.958 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:28.958 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:28.958 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:28.958 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:28.958 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:28.958 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:28.958 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:28.958 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:28.958 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:28.958 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:28.958 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:28.958 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:28.958 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:28.958 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:28.958 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:28.958 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:28.958 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:28.958 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:28.958 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:28.958 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:28.958 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:28.958 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:28.958 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:28.958 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:28.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.958 --rc genhtml_branch_coverage=1 00:08:28.958 --rc genhtml_function_coverage=1 00:08:28.958 --rc genhtml_legend=1 00:08:28.958 --rc geninfo_all_blocks=1 00:08:28.958 --rc geninfo_unexecuted_blocks=1 00:08:28.958 00:08:28.958 ' 00:08:28.958 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:28.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.958 --rc genhtml_branch_coverage=1 00:08:28.958 --rc genhtml_function_coverage=1 00:08:28.958 --rc genhtml_legend=1 00:08:28.958 --rc geninfo_all_blocks=1 00:08:28.958 --rc geninfo_unexecuted_blocks=1 00:08:28.958 00:08:28.958 ' 00:08:28.958 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:28.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.958 --rc genhtml_branch_coverage=1 00:08:28.958 --rc genhtml_function_coverage=1 00:08:28.958 --rc genhtml_legend=1 00:08:28.958 --rc geninfo_all_blocks=1 00:08:28.958 --rc geninfo_unexecuted_blocks=1 00:08:28.958 00:08:28.958 ' 00:08:28.958 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:28.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.958 --rc genhtml_branch_coverage=1 00:08:28.958 --rc genhtml_function_coverage=1 00:08:28.958 --rc genhtml_legend=1 00:08:28.958 --rc geninfo_all_blocks=1 00:08:28.958 --rc geninfo_unexecuted_blocks=1 00:08:28.958 00:08:28.958 ' 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:28.959 12:55:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:28.959 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
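[annotation, not part of the captured output] nvmf_lvs_grow.sh drives two separate SPDK processes over JSON-RPC: the nvmf target on the default /var/tmp/spdk.sock, and bdevperf on the bdevperf_rpc_sock just set above. An illustrative sketch of that split, using commands that are traced further down in this log:

  # Illustrative only; both calls appear verbatim later in the trace.
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Target-side configuration goes to nvmf_tgt via the default RPC socket:
  $rpc_py nvmf_create_transport -t tcp -o -u 8192

  # Initiator-side commands go to bdevperf via -s /var/tmp/bdevperf.sock:
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0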
00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:28.959 Cannot find device "nvmf_init_br" 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:28.959 Cannot find device "nvmf_init_br2" 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:28.959 Cannot find device "nvmf_tgt_br" 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:28.959 Cannot find device "nvmf_tgt_br2" 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:28.959 Cannot find device "nvmf_init_br" 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:28.959 Cannot find device "nvmf_init_br2" 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:28.959 Cannot find device "nvmf_tgt_br" 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:28.959 Cannot find device "nvmf_tgt_br2" 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:28.959 Cannot find device "nvmf_br" 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:08:28.959 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:28.959 Cannot find device "nvmf_init_if" 00:08:28.960 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:08:28.960 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:28.960 Cannot find device "nvmf_init_if2" 00:08:28.960 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:08:28.960 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:28.960 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:28.960 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:08:28.960 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:28.960 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:08:28.960 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:08:28.960 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:28.960 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:28.960 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:28.960 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:29.219 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:29.219 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.124 ms 00:08:29.219 00:08:29.219 --- 10.0.0.3 ping statistics --- 00:08:29.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.219 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:29.219 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:29.219 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.087 ms 00:08:29.219 00:08:29.219 --- 10.0.0.4 ping statistics --- 00:08:29.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.219 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:29.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:29.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:08:29.219 00:08:29.219 --- 10.0.0.1 ping statistics --- 00:08:29.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.219 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:29.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:29.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:08:29.219 00:08:29.219 --- 10.0.0.2 ping statistics --- 00:08:29.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.219 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:29.219 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:29.220 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:29.220 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:29.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.478 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=63279 00:08:29.478 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 63279 00:08:29.478 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 63279 ']' 00:08:29.478 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:29.478 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.478 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:29.478 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.478 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:29.478 12:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:29.478 [2024-11-29 12:55:00.783766] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
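[annotation, not part of the captured output] The EAL/reactor output that follows comes from nvmfappstart: it launches nvmf_tgt inside the target namespace, records its pid, and blocks until the RPC socket answers. A condensed sketch of the traced steps (common.sh@508-@512):

  # Condensed from the trace above; waitforlisten is the common.sh helper that
  # polls until /var/tmp/spdk.sock accepts RPCs.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  waitforlisten "$nvmfpid"
  trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT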
00:08:29.478 [2024-11-29 12:55:00.784058] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.478 [2024-11-29 12:55:00.931076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.737 [2024-11-29 12:55:00.998223] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:29.737 [2024-11-29 12:55:00.998543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:29.737 [2024-11-29 12:55:00.998578] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:29.737 [2024-11-29 12:55:00.998589] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:29.737 [2024-11-29 12:55:00.998598] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:29.737 [2024-11-29 12:55:00.999096] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.737 [2024-11-29 12:55:01.057002] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.675 12:55:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:30.675 12:55:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:30.675 12:55:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:30.675 12:55:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:30.675 12:55:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:30.675 12:55:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:30.675 12:55:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:30.675 [2024-11-29 12:55:02.158889] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:30.675 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:30.675 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:30.675 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.675 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:30.935 ************************************ 00:08:30.935 START TEST lvs_grow_clean 00:08:30.935 ************************************ 00:08:30.935 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:30.935 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:30.935 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:30.935 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:30.935 12:55:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:30.935 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:30.935 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:30.935 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:30.935 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:30.935 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:31.194 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:31.194 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:31.454 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f6ef52ba-dbf6-4726-952d-6bbeb95e3eac 00:08:31.454 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f6ef52ba-dbf6-4726-952d-6bbeb95e3eac 00:08:31.454 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:31.713 12:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:31.713 12:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:31.713 12:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f6ef52ba-dbf6-4726-952d-6bbeb95e3eac lvol 150 00:08:31.973 12:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=6680c0f6-5eab-4692-903e-797fe1b40915 00:08:31.973 12:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:31.973 12:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:32.232 [2024-11-29 12:55:03.607801] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:32.232 [2024-11-29 12:55:03.607945] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:32.232 true 00:08:32.232 12:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:32.232 12:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f6ef52ba-dbf6-4726-952d-6bbeb95e3eac 00:08:32.492 12:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:32.492 12:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:32.752 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6680c0f6-5eab-4692-903e-797fe1b40915 00:08:33.011 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:33.269 [2024-11-29 12:55:04.761990] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:33.529 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:33.787 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63373 00:08:33.787 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:33.787 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:33.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:33.787 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63373 /var/tmp/bdevperf.sock 00:08:33.787 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 63373 ']' 00:08:33.787 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:33.787 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.787 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:33.787 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.787 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:33.787 [2024-11-29 12:55:05.102624] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
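[annotation, not part of the captured output] Before bdevperf attaches, the lvs_grow_clean case has already assembled its stack: a 200M file-backed AIO bdev, an lvstore on top of it, and a 150M lvol exported as nqn.2016-06.io.spdk:cnode0 on 10.0.0.3:4420; later it enlarges the backing file and grows the lvstore (49 -> 99 data clusters). A condensed sketch assembled from the rpc.py calls traced above and below, with the UUID plumbing simplified:

  # Condensed sketch of the lvs_grow_clean flow; paths and arguments are taken from the trace.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

  truncate -s 200M "$aio_file"
  $rpc bdev_aio_create "$aio_file" aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)

  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

  # Grow path exercised later in the log: enlarge the file, rescan the AIO bdev,
  # then let the lvstore claim the new clusters.
  truncate -s 400M "$aio_file"
  $rpc bdev_aio_rescan aio_bdev
  $rpc bdev_lvol_grow_lvstore -u "$lvs"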
00:08:33.787 [2024-11-29 12:55:05.102907] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63373 ] 00:08:33.787 [2024-11-29 12:55:05.253142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.045 [2024-11-29 12:55:05.321253] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.045 [2024-11-29 12:55:05.382891] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:34.045 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.045 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:34.045 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:34.611 Nvme0n1 00:08:34.611 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:34.870 [ 00:08:34.870 { 00:08:34.870 "name": "Nvme0n1", 00:08:34.870 "aliases": [ 00:08:34.870 "6680c0f6-5eab-4692-903e-797fe1b40915" 00:08:34.870 ], 00:08:34.870 "product_name": "NVMe disk", 00:08:34.870 "block_size": 4096, 00:08:34.870 "num_blocks": 38912, 00:08:34.870 "uuid": "6680c0f6-5eab-4692-903e-797fe1b40915", 00:08:34.870 "numa_id": -1, 00:08:34.870 "assigned_rate_limits": { 00:08:34.870 "rw_ios_per_sec": 0, 00:08:34.870 "rw_mbytes_per_sec": 0, 00:08:34.870 "r_mbytes_per_sec": 0, 00:08:34.870 "w_mbytes_per_sec": 0 00:08:34.870 }, 00:08:34.870 "claimed": false, 00:08:34.870 "zoned": false, 00:08:34.870 "supported_io_types": { 00:08:34.870 "read": true, 00:08:34.870 "write": true, 00:08:34.870 "unmap": true, 00:08:34.870 "flush": true, 00:08:34.870 "reset": true, 00:08:34.870 "nvme_admin": true, 00:08:34.870 "nvme_io": true, 00:08:34.870 "nvme_io_md": false, 00:08:34.870 "write_zeroes": true, 00:08:34.870 "zcopy": false, 00:08:34.870 "get_zone_info": false, 00:08:34.870 "zone_management": false, 00:08:34.870 "zone_append": false, 00:08:34.870 "compare": true, 00:08:34.870 "compare_and_write": true, 00:08:34.870 "abort": true, 00:08:34.870 "seek_hole": false, 00:08:34.870 "seek_data": false, 00:08:34.870 "copy": true, 00:08:34.870 "nvme_iov_md": false 00:08:34.870 }, 00:08:34.870 "memory_domains": [ 00:08:34.870 { 00:08:34.870 "dma_device_id": "system", 00:08:34.870 "dma_device_type": 1 00:08:34.870 } 00:08:34.870 ], 00:08:34.870 "driver_specific": { 00:08:34.870 "nvme": [ 00:08:34.870 { 00:08:34.870 "trid": { 00:08:34.870 "trtype": "TCP", 00:08:34.870 "adrfam": "IPv4", 00:08:34.870 "traddr": "10.0.0.3", 00:08:34.870 "trsvcid": "4420", 00:08:34.870 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:34.870 }, 00:08:34.870 "ctrlr_data": { 00:08:34.870 "cntlid": 1, 00:08:34.870 "vendor_id": "0x8086", 00:08:34.870 "model_number": "SPDK bdev Controller", 00:08:34.870 "serial_number": "SPDK0", 00:08:34.870 "firmware_revision": "25.01", 00:08:34.870 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:34.870 "oacs": { 00:08:34.870 "security": 0, 00:08:34.870 "format": 0, 00:08:34.870 "firmware": 0, 
00:08:34.870 "ns_manage": 0 00:08:34.870 }, 00:08:34.870 "multi_ctrlr": true, 00:08:34.870 "ana_reporting": false 00:08:34.870 }, 00:08:34.870 "vs": { 00:08:34.870 "nvme_version": "1.3" 00:08:34.870 }, 00:08:34.870 "ns_data": { 00:08:34.870 "id": 1, 00:08:34.870 "can_share": true 00:08:34.870 } 00:08:34.870 } 00:08:34.870 ], 00:08:34.870 "mp_policy": "active_passive" 00:08:34.870 } 00:08:34.870 } 00:08:34.870 ] 00:08:34.870 12:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63389 00:08:34.870 12:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:34.870 12:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:34.870 Running I/O for 10 seconds... 00:08:35.806 Latency(us) 00:08:35.806 [2024-11-29T12:55:07.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.807 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.807 Nvme0n1 : 1.00 6695.00 26.15 0.00 0.00 0.00 0.00 0.00 00:08:35.807 [2024-11-29T12:55:07.322Z] =================================================================================================================== 00:08:35.807 [2024-11-29T12:55:07.322Z] Total : 6695.00 26.15 0.00 0.00 0.00 0.00 0.00 00:08:35.807 00:08:36.752 12:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f6ef52ba-dbf6-4726-952d-6bbeb95e3eac 00:08:36.752 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.752 Nvme0n1 : 2.00 6649.50 25.97 0.00 0.00 0.00 0.00 0.00 00:08:36.752 [2024-11-29T12:55:08.267Z] =================================================================================================================== 00:08:36.752 [2024-11-29T12:55:08.267Z] Total : 6649.50 25.97 0.00 0.00 0.00 0.00 0.00 00:08:36.752 00:08:37.011 true 00:08:37.011 12:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f6ef52ba-dbf6-4726-952d-6bbeb95e3eac 00:08:37.011 12:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:37.270 12:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:37.270 12:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:37.270 12:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63389 00:08:37.837 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.837 Nvme0n1 : 3.00 6676.67 26.08 0.00 0.00 0.00 0.00 0.00 00:08:37.837 [2024-11-29T12:55:09.352Z] =================================================================================================================== 00:08:37.837 [2024-11-29T12:55:09.352Z] Total : 6676.67 26.08 0.00 0.00 0.00 0.00 0.00 00:08:37.837 00:08:38.771 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.771 Nvme0n1 : 4.00 6626.75 25.89 0.00 0.00 0.00 0.00 0.00 00:08:38.771 [2024-11-29T12:55:10.286Z] 
=================================================================================================================== 00:08:38.771 [2024-11-29T12:55:10.286Z] Total : 6626.75 25.89 0.00 0.00 0.00 0.00 0.00 00:08:38.771 00:08:40.147 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.147 Nvme0n1 : 5.00 6596.80 25.77 0.00 0.00 0.00 0.00 0.00 00:08:40.147 [2024-11-29T12:55:11.662Z] =================================================================================================================== 00:08:40.147 [2024-11-29T12:55:11.662Z] Total : 6596.80 25.77 0.00 0.00 0.00 0.00 0.00 00:08:40.147 00:08:41.083 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.083 Nvme0n1 : 6.00 6460.50 25.24 0.00 0.00 0.00 0.00 0.00 00:08:41.083 [2024-11-29T12:55:12.598Z] =================================================================================================================== 00:08:41.083 [2024-11-29T12:55:12.598Z] Total : 6460.50 25.24 0.00 0.00 0.00 0.00 0.00 00:08:41.083 00:08:42.017 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.017 Nvme0n1 : 7.00 6462.86 25.25 0.00 0.00 0.00 0.00 0.00 00:08:42.017 [2024-11-29T12:55:13.532Z] =================================================================================================================== 00:08:42.017 [2024-11-29T12:55:13.532Z] Total : 6462.86 25.25 0.00 0.00 0.00 0.00 0.00 00:08:42.017 00:08:42.949 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.949 Nvme0n1 : 8.00 6448.75 25.19 0.00 0.00 0.00 0.00 0.00 00:08:42.949 [2024-11-29T12:55:14.464Z] =================================================================================================================== 00:08:42.949 [2024-11-29T12:55:14.464Z] Total : 6448.75 25.19 0.00 0.00 0.00 0.00 0.00 00:08:42.949 00:08:43.884 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.884 Nvme0n1 : 9.00 6437.78 25.15 0.00 0.00 0.00 0.00 0.00 00:08:43.884 [2024-11-29T12:55:15.399Z] =================================================================================================================== 00:08:43.884 [2024-11-29T12:55:15.399Z] Total : 6437.78 25.15 0.00 0.00 0.00 0.00 0.00 00:08:43.884 00:08:44.820 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.820 Nvme0n1 : 10.00 6441.70 25.16 0.00 0.00 0.00 0.00 0.00 00:08:44.820 [2024-11-29T12:55:16.335Z] =================================================================================================================== 00:08:44.820 [2024-11-29T12:55:16.335Z] Total : 6441.70 25.16 0.00 0.00 0.00 0.00 0.00 00:08:44.820 00:08:44.820 00:08:44.820 Latency(us) 00:08:44.820 [2024-11-29T12:55:16.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.820 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.820 Nvme0n1 : 10.01 6449.97 25.20 0.00 0.00 19839.75 6017.40 133455.13 00:08:44.820 [2024-11-29T12:55:16.335Z] =================================================================================================================== 00:08:44.820 [2024-11-29T12:55:16.335Z] Total : 6449.97 25.20 0.00 0.00 19839.75 6017.40 133455.13 00:08:44.820 { 00:08:44.820 "results": [ 00:08:44.820 { 00:08:44.820 "job": "Nvme0n1", 00:08:44.820 "core_mask": "0x2", 00:08:44.820 "workload": "randwrite", 00:08:44.820 "status": "finished", 00:08:44.820 "queue_depth": 128, 00:08:44.820 "io_size": 4096, 00:08:44.820 "runtime": 
10.007016, 00:08:44.820 "iops": 6449.974697752057, 00:08:44.820 "mibps": 25.195213663093973, 00:08:44.820 "io_failed": 0, 00:08:44.820 "io_timeout": 0, 00:08:44.820 "avg_latency_us": 19839.749846576386, 00:08:44.820 "min_latency_us": 6017.396363636363, 00:08:44.820 "max_latency_us": 133455.12727272726 00:08:44.820 } 00:08:44.820 ], 00:08:44.820 "core_count": 1 00:08:44.820 } 00:08:44.820 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63373 00:08:44.820 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 63373 ']' 00:08:44.820 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 63373 00:08:44.820 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:44.820 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.820 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63373 00:08:44.820 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:44.820 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:44.820 killing process with pid 63373 00:08:44.820 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63373' 00:08:44.820 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 63373 00:08:44.820 Received shutdown signal, test time was about 10.000000 seconds 00:08:44.820 00:08:44.820 Latency(us) 00:08:44.820 [2024-11-29T12:55:16.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.820 [2024-11-29T12:55:16.335Z] =================================================================================================================== 00:08:44.820 [2024-11-29T12:55:16.335Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:44.820 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 63373 00:08:45.078 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:45.336 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:45.904 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:45.904 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f6ef52ba-dbf6-4726-952d-6bbeb95e3eac 00:08:45.904 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:45.904 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:45.904 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:46.163 [2024-11-29 12:55:17.601528] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:46.163 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f6ef52ba-dbf6-4726-952d-6bbeb95e3eac 00:08:46.163 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:46.163 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f6ef52ba-dbf6-4726-952d-6bbeb95e3eac 00:08:46.163 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:46.163 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:46.163 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:46.163 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:46.163 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:46.163 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:46.163 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:46.163 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:46.163 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f6ef52ba-dbf6-4726-952d-6bbeb95e3eac 00:08:46.422 request: 00:08:46.422 { 00:08:46.422 "uuid": "f6ef52ba-dbf6-4726-952d-6bbeb95e3eac", 00:08:46.422 "method": "bdev_lvol_get_lvstores", 00:08:46.422 "req_id": 1 00:08:46.422 } 00:08:46.422 Got JSON-RPC error response 00:08:46.422 response: 00:08:46.422 { 00:08:46.422 "code": -19, 00:08:46.422 "message": "No such device" 00:08:46.422 } 00:08:46.680 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:46.680 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:46.680 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:46.680 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:46.680 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:46.680 aio_bdev 00:08:46.680 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
6680c0f6-5eab-4692-903e-797fe1b40915 00:08:46.680 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=6680c0f6-5eab-4692-903e-797fe1b40915 00:08:46.680 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:46.680 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:46.680 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:46.680 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:46.680 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:47.248 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6680c0f6-5eab-4692-903e-797fe1b40915 -t 2000 00:08:47.248 [ 00:08:47.248 { 00:08:47.248 "name": "6680c0f6-5eab-4692-903e-797fe1b40915", 00:08:47.248 "aliases": [ 00:08:47.248 "lvs/lvol" 00:08:47.248 ], 00:08:47.248 "product_name": "Logical Volume", 00:08:47.248 "block_size": 4096, 00:08:47.248 "num_blocks": 38912, 00:08:47.248 "uuid": "6680c0f6-5eab-4692-903e-797fe1b40915", 00:08:47.248 "assigned_rate_limits": { 00:08:47.248 "rw_ios_per_sec": 0, 00:08:47.248 "rw_mbytes_per_sec": 0, 00:08:47.248 "r_mbytes_per_sec": 0, 00:08:47.248 "w_mbytes_per_sec": 0 00:08:47.248 }, 00:08:47.248 "claimed": false, 00:08:47.248 "zoned": false, 00:08:47.248 "supported_io_types": { 00:08:47.248 "read": true, 00:08:47.248 "write": true, 00:08:47.248 "unmap": true, 00:08:47.248 "flush": false, 00:08:47.248 "reset": true, 00:08:47.248 "nvme_admin": false, 00:08:47.248 "nvme_io": false, 00:08:47.248 "nvme_io_md": false, 00:08:47.248 "write_zeroes": true, 00:08:47.248 "zcopy": false, 00:08:47.248 "get_zone_info": false, 00:08:47.248 "zone_management": false, 00:08:47.248 "zone_append": false, 00:08:47.248 "compare": false, 00:08:47.248 "compare_and_write": false, 00:08:47.248 "abort": false, 00:08:47.248 "seek_hole": true, 00:08:47.248 "seek_data": true, 00:08:47.248 "copy": false, 00:08:47.248 "nvme_iov_md": false 00:08:47.248 }, 00:08:47.248 "driver_specific": { 00:08:47.248 "lvol": { 00:08:47.248 "lvol_store_uuid": "f6ef52ba-dbf6-4726-952d-6bbeb95e3eac", 00:08:47.248 "base_bdev": "aio_bdev", 00:08:47.248 "thin_provision": false, 00:08:47.248 "num_allocated_clusters": 38, 00:08:47.248 "snapshot": false, 00:08:47.248 "clone": false, 00:08:47.248 "esnap_clone": false 00:08:47.248 } 00:08:47.248 } 00:08:47.248 } 00:08:47.248 ] 00:08:47.248 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:47.522 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f6ef52ba-dbf6-4726-952d-6bbeb95e3eac 00:08:47.522 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:47.522 12:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:47.522 12:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f6ef52ba-dbf6-4726-952d-6bbeb95e3eac 00:08:47.522 12:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:48.101 12:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:48.101 12:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6680c0f6-5eab-4692-903e-797fe1b40915 00:08:48.101 12:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f6ef52ba-dbf6-4726-952d-6bbeb95e3eac 00:08:48.360 12:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:48.620 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:49.188 ************************************ 00:08:49.188 END TEST lvs_grow_clean 00:08:49.188 ************************************ 00:08:49.188 00:08:49.188 real 0m18.307s 00:08:49.188 user 0m17.167s 00:08:49.188 sys 0m2.571s 00:08:49.188 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.188 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:49.188 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:49.188 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:49.188 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.188 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:49.188 ************************************ 00:08:49.188 START TEST lvs_grow_dirty 00:08:49.188 ************************************ 00:08:49.188 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:49.188 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:49.188 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:49.188 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:49.188 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:49.188 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:49.188 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:49.188 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:49.188 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:49.188 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:49.448 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:49.448 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:49.707 12:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=dce26f15-faea-4c53-a2aa-9c5d533c620e 00:08:49.707 12:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dce26f15-faea-4c53-a2aa-9c5d533c620e 00:08:49.707 12:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:50.274 12:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:50.274 12:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:50.274 12:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u dce26f15-faea-4c53-a2aa-9c5d533c620e lvol 150 00:08:50.531 12:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=004cf387-dc39-4b2a-8db1-5d5446d5cda7 00:08:50.531 12:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:50.531 12:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:50.788 [2024-11-29 12:55:22.101831] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:50.788 [2024-11-29 12:55:22.102205] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:50.788 true 00:08:50.788 12:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dce26f15-faea-4c53-a2aa-9c5d533c620e 00:08:50.788 12:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:51.046 12:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:51.046 12:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:51.304 12:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 004cf387-dc39-4b2a-8db1-5d5446d5cda7 00:08:51.568 12:55:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:51.826 [2024-11-29 12:55:23.190522] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:51.826 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:52.084 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:52.084 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63647 00:08:52.084 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:52.084 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63647 /var/tmp/bdevperf.sock 00:08:52.084 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63647 ']' 00:08:52.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:52.084 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:52.084 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.084 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:52.084 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.085 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:52.085 [2024-11-29 12:55:23.527172] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:08:52.085 [2024-11-29 12:55:23.527515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63647 ] 00:08:52.344 [2024-11-29 12:55:23.680421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.344 [2024-11-29 12:55:23.744300] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.344 [2024-11-29 12:55:23.804842] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:52.602 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:52.602 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:52.602 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:52.859 Nvme0n1 00:08:52.859 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:53.117 [ 00:08:53.117 { 00:08:53.117 "name": "Nvme0n1", 00:08:53.117 "aliases": [ 00:08:53.117 "004cf387-dc39-4b2a-8db1-5d5446d5cda7" 00:08:53.117 ], 00:08:53.117 "product_name": "NVMe disk", 00:08:53.117 "block_size": 4096, 00:08:53.117 "num_blocks": 38912, 00:08:53.117 "uuid": "004cf387-dc39-4b2a-8db1-5d5446d5cda7", 00:08:53.117 "numa_id": -1, 00:08:53.117 "assigned_rate_limits": { 00:08:53.117 "rw_ios_per_sec": 0, 00:08:53.117 "rw_mbytes_per_sec": 0, 00:08:53.117 "r_mbytes_per_sec": 0, 00:08:53.117 "w_mbytes_per_sec": 0 00:08:53.117 }, 00:08:53.117 "claimed": false, 00:08:53.117 "zoned": false, 00:08:53.117 "supported_io_types": { 00:08:53.117 "read": true, 00:08:53.117 "write": true, 00:08:53.117 "unmap": true, 00:08:53.117 "flush": true, 00:08:53.117 "reset": true, 00:08:53.117 "nvme_admin": true, 00:08:53.117 "nvme_io": true, 00:08:53.117 "nvme_io_md": false, 00:08:53.117 "write_zeroes": true, 00:08:53.117 "zcopy": false, 00:08:53.117 "get_zone_info": false, 00:08:53.117 "zone_management": false, 00:08:53.117 "zone_append": false, 00:08:53.117 "compare": true, 00:08:53.117 "compare_and_write": true, 00:08:53.117 "abort": true, 00:08:53.117 "seek_hole": false, 00:08:53.117 "seek_data": false, 00:08:53.117 "copy": true, 00:08:53.117 "nvme_iov_md": false 00:08:53.117 }, 00:08:53.117 "memory_domains": [ 00:08:53.117 { 00:08:53.117 "dma_device_id": "system", 00:08:53.117 "dma_device_type": 1 00:08:53.117 } 00:08:53.117 ], 00:08:53.117 "driver_specific": { 00:08:53.117 "nvme": [ 00:08:53.117 { 00:08:53.117 "trid": { 00:08:53.117 "trtype": "TCP", 00:08:53.117 "adrfam": "IPv4", 00:08:53.117 "traddr": "10.0.0.3", 00:08:53.117 "trsvcid": "4420", 00:08:53.117 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:53.117 }, 00:08:53.117 "ctrlr_data": { 00:08:53.117 "cntlid": 1, 00:08:53.117 "vendor_id": "0x8086", 00:08:53.117 "model_number": "SPDK bdev Controller", 00:08:53.117 "serial_number": "SPDK0", 00:08:53.117 "firmware_revision": "25.01", 00:08:53.117 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:53.117 "oacs": { 00:08:53.117 "security": 0, 00:08:53.117 "format": 0, 00:08:53.117 "firmware": 0, 
00:08:53.117 "ns_manage": 0 00:08:53.117 }, 00:08:53.117 "multi_ctrlr": true, 00:08:53.117 "ana_reporting": false 00:08:53.117 }, 00:08:53.117 "vs": { 00:08:53.117 "nvme_version": "1.3" 00:08:53.117 }, 00:08:53.117 "ns_data": { 00:08:53.117 "id": 1, 00:08:53.117 "can_share": true 00:08:53.117 } 00:08:53.117 } 00:08:53.117 ], 00:08:53.117 "mp_policy": "active_passive" 00:08:53.117 } 00:08:53.117 } 00:08:53.117 ] 00:08:53.117 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63663 00:08:53.117 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:53.117 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:53.374 Running I/O for 10 seconds... 00:08:54.305 Latency(us) 00:08:54.305 [2024-11-29T12:55:25.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.305 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.305 Nvme0n1 : 1.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:08:54.305 [2024-11-29T12:55:25.820Z] =================================================================================================================== 00:08:54.305 [2024-11-29T12:55:25.820Z] Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:08:54.305 00:08:55.239 12:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u dce26f15-faea-4c53-a2aa-9c5d533c620e 00:08:55.239 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.239 Nvme0n1 : 2.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:08:55.239 [2024-11-29T12:55:26.754Z] =================================================================================================================== 00:08:55.239 [2024-11-29T12:55:26.754Z] Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:08:55.239 00:08:55.497 true 00:08:55.497 12:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dce26f15-faea-4c53-a2aa-9c5d533c620e 00:08:55.497 12:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:55.756 12:55:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:55.756 12:55:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:55.756 12:55:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63663 00:08:56.323 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.323 Nvme0n1 : 3.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:08:56.323 [2024-11-29T12:55:27.838Z] =================================================================================================================== 00:08:56.323 [2024-11-29T12:55:27.838Z] Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:08:56.323 00:08:57.258 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.258 Nvme0n1 : 4.00 6826.25 26.67 0.00 0.00 0.00 0.00 0.00 00:08:57.258 [2024-11-29T12:55:28.773Z] 
=================================================================================================================== 00:08:57.258 [2024-11-29T12:55:28.773Z] Total : 6826.25 26.67 0.00 0.00 0.00 0.00 0.00 00:08:57.258 00:08:58.260 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.260 Nvme0n1 : 5.00 6756.40 26.39 0.00 0.00 0.00 0.00 0.00 00:08:58.260 [2024-11-29T12:55:29.775Z] =================================================================================================================== 00:08:58.260 [2024-11-29T12:55:29.775Z] Total : 6756.40 26.39 0.00 0.00 0.00 0.00 0.00 00:08:58.260 00:08:59.196 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.196 Nvme0n1 : 6.00 6518.17 25.46 0.00 0.00 0.00 0.00 0.00 00:08:59.196 [2024-11-29T12:55:30.711Z] =================================================================================================================== 00:08:59.196 [2024-11-29T12:55:30.711Z] Total : 6518.17 25.46 0.00 0.00 0.00 0.00 0.00 00:08:59.196 00:09:00.575 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.575 Nvme0n1 : 7.00 6494.14 25.37 0.00 0.00 0.00 0.00 0.00 00:09:00.575 [2024-11-29T12:55:32.090Z] =================================================================================================================== 00:09:00.575 [2024-11-29T12:55:32.090Z] Total : 6494.14 25.37 0.00 0.00 0.00 0.00 0.00 00:09:00.575 00:09:01.512 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.512 Nvme0n1 : 8.00 6492.00 25.36 0.00 0.00 0.00 0.00 0.00 00:09:01.512 [2024-11-29T12:55:33.027Z] =================================================================================================================== 00:09:01.512 [2024-11-29T12:55:33.027Z] Total : 6492.00 25.36 0.00 0.00 0.00 0.00 0.00 00:09:01.512 00:09:02.449 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.449 Nvme0n1 : 9.00 6476.22 25.30 0.00 0.00 0.00 0.00 0.00 00:09:02.449 [2024-11-29T12:55:33.964Z] =================================================================================================================== 00:09:02.449 [2024-11-29T12:55:33.964Z] Total : 6476.22 25.30 0.00 0.00 0.00 0.00 0.00 00:09:02.449 00:09:03.390 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.390 Nvme0n1 : 10.00 6450.90 25.20 0.00 0.00 0.00 0.00 0.00 00:09:03.390 [2024-11-29T12:55:34.905Z] =================================================================================================================== 00:09:03.390 [2024-11-29T12:55:34.905Z] Total : 6450.90 25.20 0.00 0.00 0.00 0.00 0.00 00:09:03.390 00:09:03.390 00:09:03.390 Latency(us) 00:09:03.390 [2024-11-29T12:55:34.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:03.390 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.390 Nvme0n1 : 10.02 6453.47 25.21 0.00 0.00 19827.97 9532.51 175398.17 00:09:03.390 [2024-11-29T12:55:34.905Z] =================================================================================================================== 00:09:03.390 [2024-11-29T12:55:34.905Z] Total : 6453.47 25.21 0.00 0.00 19827.97 9532.51 175398.17 00:09:03.390 { 00:09:03.390 "results": [ 00:09:03.390 { 00:09:03.390 "job": "Nvme0n1", 00:09:03.390 "core_mask": "0x2", 00:09:03.390 "workload": "randwrite", 00:09:03.390 "status": "finished", 00:09:03.390 "queue_depth": 128, 00:09:03.390 "io_size": 4096, 00:09:03.390 "runtime": 
10.015857, 00:09:03.390 "iops": 6453.46673779388, 00:09:03.390 "mibps": 25.208854444507345, 00:09:03.390 "io_failed": 0, 00:09:03.390 "io_timeout": 0, 00:09:03.390 "avg_latency_us": 19827.970919667456, 00:09:03.390 "min_latency_us": 9532.50909090909, 00:09:03.390 "max_latency_us": 175398.16727272727 00:09:03.390 } 00:09:03.390 ], 00:09:03.390 "core_count": 1 00:09:03.390 } 00:09:03.390 12:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63647 00:09:03.390 12:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 63647 ']' 00:09:03.390 12:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 63647 00:09:03.390 12:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:03.390 12:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:03.390 12:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63647 00:09:03.390 killing process with pid 63647 00:09:03.390 Received shutdown signal, test time was about 10.000000 seconds 00:09:03.390 00:09:03.390 Latency(us) 00:09:03.390 [2024-11-29T12:55:34.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:03.390 [2024-11-29T12:55:34.905Z] =================================================================================================================== 00:09:03.390 [2024-11-29T12:55:34.905Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:03.390 12:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:03.390 12:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:03.390 12:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63647' 00:09:03.390 12:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 63647 00:09:03.390 12:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 63647 00:09:03.650 12:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:03.909 12:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:04.168 12:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dce26f15-faea-4c53-a2aa-9c5d533c620e 00:09:04.168 12:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:04.427 12:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:04.427 12:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:04.427 12:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63279 
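At this point the dirty variant has confirmed the grown lvstore (99 data clusters, 61 free) and then deliberately SIGKILLs the nvmf target instead of shutting it down, so the lvstore on the AIO backing file is left without a clean close. A minimal sketch of that step, assuming a shell variable $nvmfpid holds the target's pid (an illustrative name; this run uses pid 63279), not the script's exact wording:

# check the post-grow free cluster count before the simulated crash (61 on this run)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dce26f15-faea-4c53-a2aa-9c5d533c620e | jq -r '.[0].free_clusters'
# simulate a crash: SIGKILL the target so the lvstore is never cleanly closed
kill -9 "$nvmfpid"
wait "$nvmfpid" || true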
00:09:04.427 12:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63279 00:09:04.685 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63279 Killed "${NVMF_APP[@]}" "$@" 00:09:04.685 12:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:04.685 12:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:04.685 12:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:04.685 12:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:04.685 12:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:04.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.685 12:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63796 00:09:04.685 12:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:04.685 12:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63796 00:09:04.685 12:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63796 ']' 00:09:04.685 12:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.685 12:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:04.685 12:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.685 12:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:04.685 12:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:04.685 [2024-11-29 12:55:36.028413] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:09:04.685 [2024-11-29 12:55:36.028547] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.685 [2024-11-29 12:55:36.177729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.944 [2024-11-29 12:55:36.251922] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:04.944 [2024-11-29 12:55:36.252237] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:04.944 [2024-11-29 12:55:36.252257] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:04.944 [2024-11-29 12:55:36.252267] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:04.944 [2024-11-29 12:55:36.252276] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:04.944 [2024-11-29 12:55:36.252757] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.944 [2024-11-29 12:55:36.316000] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:05.880 12:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:05.880 12:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:05.880 12:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:05.880 12:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:05.880 12:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:05.880 12:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:05.880 12:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:06.139 [2024-11-29 12:55:37.423090] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:06.139 [2024-11-29 12:55:37.423430] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:06.139 [2024-11-29 12:55:37.423797] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:06.139 12:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:06.139 12:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 004cf387-dc39-4b2a-8db1-5d5446d5cda7 00:09:06.139 12:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=004cf387-dc39-4b2a-8db1-5d5446d5cda7 00:09:06.139 12:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:06.139 12:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:06.139 12:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:06.139 12:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:06.139 12:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:06.397 12:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 004cf387-dc39-4b2a-8db1-5d5446d5cda7 -t 2000 00:09:06.655 [ 00:09:06.655 { 00:09:06.655 "name": "004cf387-dc39-4b2a-8db1-5d5446d5cda7", 00:09:06.655 "aliases": [ 00:09:06.655 "lvs/lvol" 00:09:06.655 ], 00:09:06.655 "product_name": "Logical Volume", 00:09:06.655 "block_size": 4096, 00:09:06.655 "num_blocks": 38912, 00:09:06.655 "uuid": "004cf387-dc39-4b2a-8db1-5d5446d5cda7", 00:09:06.655 "assigned_rate_limits": { 00:09:06.655 "rw_ios_per_sec": 0, 00:09:06.655 "rw_mbytes_per_sec": 0, 00:09:06.655 "r_mbytes_per_sec": 0, 00:09:06.655 "w_mbytes_per_sec": 0 00:09:06.655 }, 00:09:06.655 
"claimed": false, 00:09:06.655 "zoned": false, 00:09:06.655 "supported_io_types": { 00:09:06.655 "read": true, 00:09:06.655 "write": true, 00:09:06.655 "unmap": true, 00:09:06.655 "flush": false, 00:09:06.655 "reset": true, 00:09:06.655 "nvme_admin": false, 00:09:06.655 "nvme_io": false, 00:09:06.655 "nvme_io_md": false, 00:09:06.655 "write_zeroes": true, 00:09:06.655 "zcopy": false, 00:09:06.655 "get_zone_info": false, 00:09:06.655 "zone_management": false, 00:09:06.655 "zone_append": false, 00:09:06.655 "compare": false, 00:09:06.655 "compare_and_write": false, 00:09:06.655 "abort": false, 00:09:06.655 "seek_hole": true, 00:09:06.655 "seek_data": true, 00:09:06.655 "copy": false, 00:09:06.655 "nvme_iov_md": false 00:09:06.655 }, 00:09:06.655 "driver_specific": { 00:09:06.655 "lvol": { 00:09:06.656 "lvol_store_uuid": "dce26f15-faea-4c53-a2aa-9c5d533c620e", 00:09:06.656 "base_bdev": "aio_bdev", 00:09:06.656 "thin_provision": false, 00:09:06.656 "num_allocated_clusters": 38, 00:09:06.656 "snapshot": false, 00:09:06.656 "clone": false, 00:09:06.656 "esnap_clone": false 00:09:06.656 } 00:09:06.656 } 00:09:06.656 } 00:09:06.656 ] 00:09:06.656 12:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:06.656 12:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dce26f15-faea-4c53-a2aa-9c5d533c620e 00:09:06.656 12:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:06.914 12:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:06.914 12:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:06.914 12:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dce26f15-faea-4c53-a2aa-9c5d533c620e 00:09:07.496 12:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:07.496 12:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:07.763 [2024-11-29 12:55:39.036601] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:07.764 12:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dce26f15-faea-4c53-a2aa-9c5d533c620e 00:09:07.764 12:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:07.764 12:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dce26f15-faea-4c53-a2aa-9c5d533c620e 00:09:07.764 12:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:07.764 12:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:07.764 12:55:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:07.764 12:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:07.764 12:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:07.764 12:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:07.764 12:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:07.764 12:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:07.764 12:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dce26f15-faea-4c53-a2aa-9c5d533c620e 00:09:08.022 request: 00:09:08.022 { 00:09:08.022 "uuid": "dce26f15-faea-4c53-a2aa-9c5d533c620e", 00:09:08.022 "method": "bdev_lvol_get_lvstores", 00:09:08.022 "req_id": 1 00:09:08.022 } 00:09:08.022 Got JSON-RPC error response 00:09:08.022 response: 00:09:08.022 { 00:09:08.022 "code": -19, 00:09:08.022 "message": "No such device" 00:09:08.022 } 00:09:08.022 12:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:08.022 12:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:08.022 12:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:08.022 12:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:08.022 12:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:08.281 aio_bdev 00:09:08.281 12:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 004cf387-dc39-4b2a-8db1-5d5446d5cda7 00:09:08.281 12:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=004cf387-dc39-4b2a-8db1-5d5446d5cda7 00:09:08.281 12:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:08.281 12:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:08.281 12:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:08.281 12:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:08.281 12:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:08.539 12:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 004cf387-dc39-4b2a-8db1-5d5446d5cda7 -t 2000 00:09:08.798 [ 00:09:08.798 { 
00:09:08.798 "name": "004cf387-dc39-4b2a-8db1-5d5446d5cda7", 00:09:08.798 "aliases": [ 00:09:08.798 "lvs/lvol" 00:09:08.798 ], 00:09:08.798 "product_name": "Logical Volume", 00:09:08.798 "block_size": 4096, 00:09:08.798 "num_blocks": 38912, 00:09:08.798 "uuid": "004cf387-dc39-4b2a-8db1-5d5446d5cda7", 00:09:08.798 "assigned_rate_limits": { 00:09:08.798 "rw_ios_per_sec": 0, 00:09:08.798 "rw_mbytes_per_sec": 0, 00:09:08.798 "r_mbytes_per_sec": 0, 00:09:08.798 "w_mbytes_per_sec": 0 00:09:08.798 }, 00:09:08.798 "claimed": false, 00:09:08.798 "zoned": false, 00:09:08.798 "supported_io_types": { 00:09:08.798 "read": true, 00:09:08.798 "write": true, 00:09:08.798 "unmap": true, 00:09:08.798 "flush": false, 00:09:08.798 "reset": true, 00:09:08.798 "nvme_admin": false, 00:09:08.798 "nvme_io": false, 00:09:08.798 "nvme_io_md": false, 00:09:08.798 "write_zeroes": true, 00:09:08.798 "zcopy": false, 00:09:08.798 "get_zone_info": false, 00:09:08.798 "zone_management": false, 00:09:08.798 "zone_append": false, 00:09:08.798 "compare": false, 00:09:08.798 "compare_and_write": false, 00:09:08.798 "abort": false, 00:09:08.798 "seek_hole": true, 00:09:08.798 "seek_data": true, 00:09:08.798 "copy": false, 00:09:08.798 "nvme_iov_md": false 00:09:08.798 }, 00:09:08.798 "driver_specific": { 00:09:08.798 "lvol": { 00:09:08.798 "lvol_store_uuid": "dce26f15-faea-4c53-a2aa-9c5d533c620e", 00:09:08.798 "base_bdev": "aio_bdev", 00:09:08.798 "thin_provision": false, 00:09:08.798 "num_allocated_clusters": 38, 00:09:08.798 "snapshot": false, 00:09:08.798 "clone": false, 00:09:08.798 "esnap_clone": false 00:09:08.798 } 00:09:08.798 } 00:09:08.798 } 00:09:08.798 ] 00:09:08.798 12:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:08.798 12:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dce26f15-faea-4c53-a2aa-9c5d533c620e 00:09:08.799 12:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:09.058 12:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:09.058 12:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dce26f15-faea-4c53-a2aa-9c5d533c620e 00:09:09.058 12:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:09.316 12:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:09.316 12:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 004cf387-dc39-4b2a-8db1-5d5446d5cda7 00:09:09.883 12:55:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dce26f15-faea-4c53-a2aa-9c5d533c620e 00:09:10.142 12:55:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:10.400 12:55:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:10.659 ************************************ 00:09:10.659 END TEST lvs_grow_dirty 00:09:10.659 ************************************ 00:09:10.659 00:09:10.659 real 0m21.567s 00:09:10.659 user 0m43.249s 00:09:10.659 sys 0m8.946s 00:09:10.659 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.659 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:10.659 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:10.659 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:10.659 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:10.659 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:10.659 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:10.917 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:10.917 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:10.917 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:10.917 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:10.917 nvmf_trace.0 00:09:10.917 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:10.917 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:10.917 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:10.917 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:11.175 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:11.175 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:11.175 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:11.175 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:11.175 rmmod nvme_tcp 00:09:11.175 rmmod nvme_fabrics 00:09:11.175 rmmod nvme_keyring 00:09:11.175 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:11.175 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:11.175 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:11.175 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63796 ']' 00:09:11.175 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63796 00:09:11.175 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 63796 ']' 00:09:11.175 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 63796 00:09:11.175 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:11.175 12:55:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:11.175 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63796 00:09:11.434 killing process with pid 63796 00:09:11.434 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:11.434 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:11.434 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63796' 00:09:11.434 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 63796 00:09:11.434 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 63796 00:09:11.434 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:11.434 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:11.434 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:11.434 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:11.434 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:11.434 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:11.434 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:11.434 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:11.434 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:11.434 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:11.434 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:11.693 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:11.693 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:11.693 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:11.693 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:11.693 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:11.693 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:11.693 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:11.693 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:11.693 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:11.693 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:11.693 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:11.693 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:09:11.693 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.693 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.693 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.693 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:09:11.693 00:09:11.693 real 0m43.117s 00:09:11.693 user 1m8.139s 00:09:11.693 sys 0m12.596s 00:09:11.693 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.693 ************************************ 00:09:11.693 END TEST nvmf_lvs_grow 00:09:11.693 ************************************ 00:09:11.693 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:11.953 ************************************ 00:09:11.953 START TEST nvmf_bdev_io_wait 00:09:11.953 ************************************ 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:11.953 * Looking for test storage... 
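Before the bdev_io_wait suite proceeds, a recap of the dirty-lvstore check that just finished: it reduces to two rpc.py queries piped through jq, plus a negative check after the backing AIO bdev is removed. A minimal sketch, using the lvstore UUID and expected cluster counts from this run:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dce26f15-faea-4c53-a2aa-9c5d533c620e | jq -r '.[0].free_clusters'        # expect 61
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dce26f15-faea-4c53-a2aa-9c5d533c620e | jq -r '.[0].total_data_clusters'  # expect 99
    # After bdev_aio_delete aio_bdev the same query must fail with -19 ("No such device"),
    # and re-running bdev_aio_create on the same file restores the lvstore and its lvol bdev.
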
00:09:11.953 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:11.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.953 --rc genhtml_branch_coverage=1 00:09:11.953 --rc genhtml_function_coverage=1 00:09:11.953 --rc genhtml_legend=1 00:09:11.953 --rc geninfo_all_blocks=1 00:09:11.953 --rc geninfo_unexecuted_blocks=1 00:09:11.953 00:09:11.953 ' 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:11.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.953 --rc genhtml_branch_coverage=1 00:09:11.953 --rc genhtml_function_coverage=1 00:09:11.953 --rc genhtml_legend=1 00:09:11.953 --rc geninfo_all_blocks=1 00:09:11.953 --rc geninfo_unexecuted_blocks=1 00:09:11.953 00:09:11.953 ' 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:11.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.953 --rc genhtml_branch_coverage=1 00:09:11.953 --rc genhtml_function_coverage=1 00:09:11.953 --rc genhtml_legend=1 00:09:11.953 --rc geninfo_all_blocks=1 00:09:11.953 --rc geninfo_unexecuted_blocks=1 00:09:11.953 00:09:11.953 ' 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:11.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.953 --rc genhtml_branch_coverage=1 00:09:11.953 --rc genhtml_function_coverage=1 00:09:11.953 --rc genhtml_legend=1 00:09:11.953 --rc geninfo_all_blocks=1 00:09:11.953 --rc geninfo_unexecuted_blocks=1 00:09:11.953 00:09:11.953 ' 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.953 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.954 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.954 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:11.954 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.954 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:11.954 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:11.954 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:11.954 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:11.954 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:11.954 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:11.954 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:11.954 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:11.954 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:11.954 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:11.954 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:11.954 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:11.954 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
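One note on the output above: the "[: : integer expression expected" message from nvmf/common.sh line 33 is harmless here. It comes from testing a variable that is empty in this configuration with -eq, which bash's test builtin rejects; the run continues past it, as seen above. Reproducible in isolation:

    [ '' -eq 1 ]   # bash: [: : integer expression expected (test exits with status 2)
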
00:09:11.954 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:11.954 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:11.954 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:11.954 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:11.954 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:11.954 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:11.954 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.954 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.954 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:12.213 
12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:12.213 Cannot find device "nvmf_init_br" 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:12.213 Cannot find device "nvmf_init_br2" 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:12.213 Cannot find device "nvmf_tgt_br" 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:12.213 Cannot find device "nvmf_tgt_br2" 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:12.213 Cannot find device "nvmf_init_br" 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:12.213 Cannot find device "nvmf_init_br2" 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:12.213 Cannot find device "nvmf_tgt_br" 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:12.213 Cannot find device "nvmf_tgt_br2" 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:09:12.213 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:12.213 Cannot find device "nvmf_br" 00:09:12.214 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:09:12.214 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:12.214 Cannot find device "nvmf_init_if" 00:09:12.214 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:09:12.214 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:12.214 Cannot find device "nvmf_init_if2" 00:09:12.214 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:09:12.214 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:12.214 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:12.214 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:09:12.214 
12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:12.214 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:12.214 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:09:12.214 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:12.214 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:12.214 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:12.214 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:12.214 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:12.214 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:12.214 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:12.473 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:12.473 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:12.473 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:12.473 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:12.473 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:12.473 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:12.473 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:12.473 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:12.473 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:12.473 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:12.473 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:12.473 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:12.473 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:12.473 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:12.473 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:12.473 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:12.473 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:12.473 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:12.473 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:12.473 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:12.473 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:12.473 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:12.473 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:12.473 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:12.473 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:12.473 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:12.474 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:12.474 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.139 ms 00:09:12.474 00:09:12.474 --- 10.0.0.3 ping statistics --- 00:09:12.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.474 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:09:12.474 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:12.474 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:12.474 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.078 ms 00:09:12.474 00:09:12.474 --- 10.0.0.4 ping statistics --- 00:09:12.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.474 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:09:12.474 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:12.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:12.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:09:12.474 00:09:12.474 --- 10.0.0.1 ping statistics --- 00:09:12.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.474 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:09:12.474 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:12.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:12.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:09:12.474 00:09:12.474 --- 10.0.0.2 ping statistics --- 00:09:12.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.474 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:09:12.474 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:12.474 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:09:12.474 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:12.474 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:12.474 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:12.474 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:12.474 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:12.474 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:12.474 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:12.474 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:12.474 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:12.474 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:12.474 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.474 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=64180 00:09:12.474 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 64180 00:09:12.474 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:12.474 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 64180 ']' 00:09:12.474 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.474 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:12.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.474 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.474 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:12.474 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.474 [2024-11-29 12:55:43.977414] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
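The four pings above confirm the virtual topology that nvmf_veth_init just built. Condensed from the commands logged above (first veth pair only; nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2/10.0.0.4 follow the same pattern), a sketch of the setup:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

With the topology up, the nvmf target is launched inside the namespace and will listen on 10.0.0.3:4420, which is the startup now in progress below.
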
00:09:12.474 [2024-11-29 12:55:43.977504] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.733 [2024-11-29 12:55:44.123486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:12.733 [2024-11-29 12:55:44.187204] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:12.733 [2024-11-29 12:55:44.187259] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:12.733 [2024-11-29 12:55:44.187270] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:12.733 [2024-11-29 12:55:44.187278] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:12.733 [2024-11-29 12:55:44.187286] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:12.733 [2024-11-29 12:55:44.188458] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.733 [2024-11-29 12:55:44.188613] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:12.733 [2024-11-29 12:55:44.188802] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.733 [2024-11-29 12:55:44.189215] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.992 [2024-11-29 12:55:44.358389] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.992 [2024-11-29 12:55:44.374659] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.992 Malloc0 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.992 [2024-11-29 12:55:44.440601] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64203 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64205 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:12.992 12:55:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:12.992 { 00:09:12.992 "params": { 00:09:12.992 "name": "Nvme$subsystem", 00:09:12.992 "trtype": "$TEST_TRANSPORT", 00:09:12.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:12.992 "adrfam": "ipv4", 00:09:12.992 "trsvcid": "$NVMF_PORT", 00:09:12.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:12.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:12.992 "hdgst": ${hdgst:-false}, 00:09:12.992 "ddgst": ${ddgst:-false} 00:09:12.992 }, 00:09:12.992 "method": "bdev_nvme_attach_controller" 00:09:12.992 } 00:09:12.992 EOF 00:09:12.992 )") 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64207 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:12.992 { 00:09:12.992 "params": { 00:09:12.992 "name": "Nvme$subsystem", 00:09:12.992 "trtype": "$TEST_TRANSPORT", 00:09:12.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:12.992 "adrfam": "ipv4", 00:09:12.992 "trsvcid": "$NVMF_PORT", 00:09:12.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:12.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:12.992 "hdgst": ${hdgst:-false}, 00:09:12.992 "ddgst": ${ddgst:-false} 00:09:12.992 }, 00:09:12.992 "method": "bdev_nvme_attach_controller" 00:09:12.992 } 00:09:12.992 EOF 00:09:12.992 )") 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:12.992 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:12.993 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:12.993 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:12.993 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:12.993 { 00:09:12.993 "params": { 00:09:12.993 "name": "Nvme$subsystem", 00:09:12.993 "trtype": "$TEST_TRANSPORT", 00:09:12.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:12.993 "adrfam": "ipv4", 00:09:12.993 "trsvcid": 
"$NVMF_PORT", 00:09:12.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:12.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:12.993 "hdgst": ${hdgst:-false}, 00:09:12.993 "ddgst": ${ddgst:-false} 00:09:12.993 }, 00:09:12.993 "method": "bdev_nvme_attach_controller" 00:09:12.993 } 00:09:12.993 EOF 00:09:12.993 )") 00:09:12.993 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64210 00:09:12.993 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:12.993 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:12.993 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:12.993 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:12.993 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:12.993 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:12.993 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:12.993 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:12.993 { 00:09:12.993 "params": { 00:09:12.993 "name": "Nvme$subsystem", 00:09:12.993 "trtype": "$TEST_TRANSPORT", 00:09:12.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:12.993 "adrfam": "ipv4", 00:09:12.993 "trsvcid": "$NVMF_PORT", 00:09:12.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:12.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:12.993 "hdgst": ${hdgst:-false}, 00:09:12.993 "ddgst": ${ddgst:-false} 00:09:12.993 }, 00:09:12.993 "method": "bdev_nvme_attach_controller" 00:09:12.993 } 00:09:12.993 EOF 00:09:12.993 )") 00:09:12.993 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:12.993 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:12.993 "params": { 00:09:12.993 "name": "Nvme1", 00:09:12.993 "trtype": "tcp", 00:09:12.993 "traddr": "10.0.0.3", 00:09:12.993 "adrfam": "ipv4", 00:09:12.993 "trsvcid": "4420", 00:09:12.993 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:12.993 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:12.993 "hdgst": false, 00:09:12.993 "ddgst": false 00:09:12.993 }, 00:09:12.993 "method": "bdev_nvme_attach_controller" 00:09:12.993 }' 00:09:12.993 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:12.993 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:12.993 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:12.993 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:12.993 "params": { 00:09:12.993 "name": "Nvme1", 00:09:12.993 "trtype": "tcp", 00:09:12.993 "traddr": "10.0.0.3", 00:09:12.993 "adrfam": "ipv4", 00:09:12.993 "trsvcid": "4420", 00:09:12.993 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:12.993 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:12.993 "hdgst": false, 00:09:12.993 "ddgst": false 00:09:12.993 }, 00:09:12.993 "method": "bdev_nvme_attach_controller" 00:09:12.993 }' 00:09:12.993 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:12.993 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:12.993 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:12.993 "params": { 00:09:12.993 "name": "Nvme1", 00:09:12.993 "trtype": "tcp", 00:09:12.993 "traddr": "10.0.0.3", 00:09:12.993 "adrfam": "ipv4", 00:09:12.993 "trsvcid": "4420", 00:09:12.993 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:12.993 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:12.993 "hdgst": false, 00:09:12.993 "ddgst": false 00:09:12.993 }, 00:09:12.993 "method": "bdev_nvme_attach_controller" 00:09:12.993 }' 00:09:12.993 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:12.993 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:12.993 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:12.993 "params": { 00:09:12.993 "name": "Nvme1", 00:09:12.993 "trtype": "tcp", 00:09:12.993 "traddr": "10.0.0.3", 00:09:12.993 "adrfam": "ipv4", 00:09:12.993 "trsvcid": "4420", 00:09:12.993 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:12.993 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:12.993 "hdgst": false, 00:09:12.993 "ddgst": false 00:09:12.993 }, 00:09:12.993 "method": "bdev_nvme_attach_controller" 00:09:12.993 }' 00:09:13.251 [2024-11-29 12:55:44.513008] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:09:13.251 [2024-11-29 12:55:44.513121] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:13.251 [2024-11-29 12:55:44.516101] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:09:13.251 [2024-11-29 12:55:44.516191] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:13.251 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64203 00:09:13.251 [2024-11-29 12:55:44.548315] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:09:13.251 [2024-11-29 12:55:44.548412] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:13.251 [2024-11-29 12:55:44.579474] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
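The /dev/fd/63 arguments above are bash process substitutions: each bdevperf instance reads a generated JSON config whose bdev_nvme_attach_controller parameters are the blocks printed above (Nvme1 over TCP to 10.0.0.3:4420, subsystem nqn.2016-06.io.spdk:cnode1). Roughly, the write-workload instance is started as follows (flags copied from this run; gen_nvmf_target_json is the harness helper seen above, wrapper details in bdev_io_wait.sh):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 \
        --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256

The read, flush and unmap instances differ only in core mask (-m), instance id (-i) and workload (-w), which is also why their DPDK EAL dumps carry distinct --file-prefix values.
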
00:09:13.251 [2024-11-29 12:55:44.579603] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:13.251 [2024-11-29 12:55:44.745554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.510 [2024-11-29 12:55:44.803434] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:09:13.510 [2024-11-29 12:55:44.817528] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:13.510 [2024-11-29 12:55:44.831575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.510 [2024-11-29 12:55:44.897372] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:09:13.510 [2024-11-29 12:55:44.916643] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:13.510 [2024-11-29 12:55:44.917063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.510 Running I/O for 1 seconds... 00:09:13.510 [2024-11-29 12:55:44.979921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.510 [2024-11-29 12:55:44.987026] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:09:13.510 [2024-11-29 12:55:45.000969] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:13.769 [2024-11-29 12:55:45.038564] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:09:13.769 Running I/O for 1 seconds... 00:09:13.769 [2024-11-29 12:55:45.052561] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:13.769 Running I/O for 1 seconds... 00:09:13.769 Running I/O for 1 seconds... 
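At this point four single-reactor bdevperf processes have come up (cores 4 through 7, each overriding the socket layer to uring) and each prints "Running I/O for 1 seconds". Matching the EAL parameters earlier (core masks 0x10/0x20/0x40/0x80, file prefixes spdk1-spdk4) and the per-job latency tables that follow, they exercise write, read, flush and unmap against the same Nvme1n1 bdev. Roughly the launch-and-wait pattern behind this, with flags approximated from the log rather than taken from target/bdev_io_wait.sh itself; UNMAP_PID appears in the trace, the other PID variable names are assumed by analogy.

# Approximate shape of the four concurrent bdevperf jobs (-t 1 per "Running I/O
# for 1 seconds"; -q 128, -o 4096 and the workloads come from the latency tables
# below; the --json process substitution reuses gen_nvmf_target_json).
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
$bdevperf -m 0x10 -q 128 -o 4096 -t 1 -w write --json <(gen_nvmf_target_json) &
WRITE_PID=$!
$bdevperf -m 0x20 -q 128 -o 4096 -t 1 -w read  --json <(gen_nvmf_target_json) &
READ_PID=$!
$bdevperf -m 0x40 -q 128 -o 4096 -t 1 -w flush --json <(gen_nvmf_target_json) &
FLUSH_PID=$!
$bdevperf -m 0x80 -q 128 -o 4096 -t 1 -w unmap --json <(gen_nvmf_target_json) &
UNMAP_PID=$!
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"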
00:09:14.700 4724.00 IOPS, 18.45 MiB/s 00:09:14.700 Latency(us) 00:09:14.700 [2024-11-29T12:55:46.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.700 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:14.700 Nvme1n1 : 1.04 4666.29 18.23 0.00 0.00 26874.24 5630.14 50998.92 00:09:14.700 [2024-11-29T12:55:46.215Z] =================================================================================================================== 00:09:14.700 [2024-11-29T12:55:46.215Z] Total : 4666.29 18.23 0.00 0.00 26874.24 5630.14 50998.92 00:09:14.700 5896.00 IOPS, 23.03 MiB/s 00:09:14.700 Latency(us) 00:09:14.700 [2024-11-29T12:55:46.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.700 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:14.700 Nvme1n1 : 1.02 5928.09 23.16 0.00 0.00 21410.99 10962.39 31457.28 00:09:14.700 [2024-11-29T12:55:46.215Z] =================================================================================================================== 00:09:14.700 [2024-11-29T12:55:46.215Z] Total : 5928.09 23.16 0.00 0.00 21410.99 10962.39 31457.28 00:09:14.700 159224.00 IOPS, 621.97 MiB/s 00:09:14.700 Latency(us) 00:09:14.700 [2024-11-29T12:55:46.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.700 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:14.700 Nvme1n1 : 1.00 158817.01 620.38 0.00 0.00 801.27 491.52 2546.97 00:09:14.700 [2024-11-29T12:55:46.215Z] =================================================================================================================== 00:09:14.700 [2024-11-29T12:55:46.215Z] Total : 158817.01 620.38 0.00 0.00 801.27 491.52 2546.97 00:09:14.700 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64205 00:09:14.700 5010.00 IOPS, 19.57 MiB/s 00:09:14.700 Latency(us) 00:09:14.701 [2024-11-29T12:55:46.216Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.701 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:14.701 Nvme1n1 : 1.01 5151.86 20.12 0.00 0.00 24766.60 5213.09 67204.19 00:09:14.701 [2024-11-29T12:55:46.216Z] =================================================================================================================== 00:09:14.701 [2024-11-29T12:55:46.216Z] Total : 5151.86 20.12 0.00 0.00 24766.60 5213.09 67204.19 00:09:14.959 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64207 00:09:14.959 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64210 00:09:14.959 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:14.959 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.959 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.959 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.959 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:14.959 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:14.959 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:09:14.959 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:14.959 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:14.959 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:14.959 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:14.959 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:14.959 rmmod nvme_tcp 00:09:14.959 rmmod nvme_fabrics 00:09:14.959 rmmod nvme_keyring 00:09:14.959 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:14.959 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:14.959 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:14.959 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 64180 ']' 00:09:14.959 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 64180 00:09:14.959 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 64180 ']' 00:09:14.959 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 64180 00:09:14.959 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:15.218 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:15.218 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64180 00:09:15.218 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:15.218 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:15.218 killing process with pid 64180 00:09:15.218 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64180' 00:09:15.218 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 64180 00:09:15.218 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 64180 00:09:15.218 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:15.218 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:15.219 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:15.219 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:15.219 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:15.219 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:15.219 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:15.219 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:15.219 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:15.219 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:15.478 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:15.478 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:15.478 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:15.478 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:15.478 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:15.478 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:15.478 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:15.478 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:15.478 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:15.478 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:15.478 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:15.478 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:15.478 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:15.478 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.478 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:15.478 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.738 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:09:15.738 00:09:15.738 real 0m3.765s 00:09:15.738 user 0m14.985s 00:09:15.738 sys 0m2.150s 00:09:15.738 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.738 ************************************ 00:09:15.738 12:55:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:15.738 END TEST nvmf_bdev_io_wait 00:09:15.738 ************************************ 00:09:15.738 12:55:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:15.738 12:55:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:15.738 12:55:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.738 12:55:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:15.738 ************************************ 00:09:15.738 START TEST nvmf_queue_depth 00:09:15.738 ************************************ 00:09:15.738 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:15.738 * Looking for test storage... 
00:09:15.738 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:15.738 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:15.738 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:15.738 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:15.738 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:15.738 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:15.738 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:15.738 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:15.738 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:15.738 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:15.738 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:15.738 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:15.738 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:15.738 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:15.738 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:15.738 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:15.738 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:15.738 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:15.738 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:15.738 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:15.738 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:15.738 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:15.738 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:15.738 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:15.738 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:15.998 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:15.998 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:15.998 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:15.998 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:15.998 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:15.998 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:15.998 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:15.998 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:15.998 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:15.998 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:15.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.998 --rc genhtml_branch_coverage=1 00:09:15.998 --rc genhtml_function_coverage=1 00:09:15.998 --rc genhtml_legend=1 00:09:15.998 --rc geninfo_all_blocks=1 00:09:15.998 --rc geninfo_unexecuted_blocks=1 00:09:15.998 00:09:15.998 ' 00:09:15.998 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:15.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.998 --rc genhtml_branch_coverage=1 00:09:15.998 --rc genhtml_function_coverage=1 00:09:15.998 --rc genhtml_legend=1 00:09:15.998 --rc geninfo_all_blocks=1 00:09:15.998 --rc geninfo_unexecuted_blocks=1 00:09:15.998 00:09:15.998 ' 00:09:15.998 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:15.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.998 --rc genhtml_branch_coverage=1 00:09:15.998 --rc genhtml_function_coverage=1 00:09:15.998 --rc genhtml_legend=1 00:09:15.998 --rc geninfo_all_blocks=1 00:09:15.998 --rc geninfo_unexecuted_blocks=1 00:09:15.998 00:09:15.998 ' 00:09:15.998 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:15.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.998 --rc genhtml_branch_coverage=1 00:09:15.998 --rc genhtml_function_coverage=1 00:09:15.998 --rc genhtml_legend=1 00:09:15.998 --rc geninfo_all_blocks=1 00:09:15.998 --rc geninfo_unexecuted_blocks=1 00:09:15.999 00:09:15.999 ' 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:15.999 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:15.999 
12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:15.999 12:55:47 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:15.999 Cannot find device "nvmf_init_br" 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:15.999 Cannot find device "nvmf_init_br2" 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:15.999 Cannot find device "nvmf_tgt_br" 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:15.999 Cannot find device "nvmf_tgt_br2" 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:15.999 Cannot find device "nvmf_init_br" 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:15.999 Cannot find device "nvmf_init_br2" 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:15.999 Cannot find device "nvmf_tgt_br" 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:15.999 Cannot find device "nvmf_tgt_br2" 00:09:15.999 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:09:16.000 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:16.000 Cannot find device "nvmf_br" 00:09:16.000 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:09:16.000 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:16.000 Cannot find device "nvmf_init_if" 00:09:16.000 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:09:16.000 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:16.000 Cannot find device "nvmf_init_if2" 00:09:16.000 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:09:16.000 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:16.000 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:16.000 12:55:47 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:09:16.000 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:16.000 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:16.000 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:09:16.000 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:16.000 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:16.000 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:16.000 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:16.000 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:16.000 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:16.259 
12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:16.259 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:16.259 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:09:16.259 00:09:16.259 --- 10.0.0.3 ping statistics --- 00:09:16.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.259 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:16.259 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:16.259 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:09:16.259 00:09:16.259 --- 10.0.0.4 ping statistics --- 00:09:16.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.259 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:16.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:16.259 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:09:16.259 00:09:16.259 --- 10.0.0.1 ping statistics --- 00:09:16.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.259 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:16.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:16.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:09:16.259 00:09:16.259 --- 10.0.0.2 ping statistics --- 00:09:16.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.259 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:16.259 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:16.260 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:16.260 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:16.260 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:16.260 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:16.260 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:16.260 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:16.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.260 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64469 00:09:16.260 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64469 00:09:16.260 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64469 ']' 00:09:16.260 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.260 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:16.260 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:16.260 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.260 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:16.260 12:55:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:16.573 [2024-11-29 12:55:47.789196] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
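The nvmf_veth_init trace above builds the test network that the queue depth run uses: the target lives in the nvmf_tgt_ns_spdk namespace on 10.0.0.3/10.0.0.4, the initiator side stays in the root namespace on 10.0.0.1/10.0.0.2, the veth peer ends meet on bridge nvmf_br, port 4420 is opened with iptables, and connectivity is verified with the pings. The same setup, condensed from the commands in the trace (comments and ordering are editorial; the full trace also tags each iptables rule with an SPDK_NVMF comment):

# Condensed veth/bridge topology for the NVMe/TCP test, as driven above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$br" master nvmf_br
done
# Allow NVMe/TCP (port 4420) in and forwarding across the bridge, then start the
# target pinned to one core inside the namespace, as in the trace above.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &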
00:09:16.573 [2024-11-29 12:55:47.789595] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:16.573 [2024-11-29 12:55:47.937454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.573 [2024-11-29 12:55:48.011420] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:16.573 [2024-11-29 12:55:48.011483] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:16.573 [2024-11-29 12:55:48.011521] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:16.573 [2024-11-29 12:55:48.011549] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:16.573 [2024-11-29 12:55:48.011560] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:16.573 [2024-11-29 12:55:48.012116] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.847 [2024-11-29 12:55:48.087067] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:17.417 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.417 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:17.417 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:17.417 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:17.417 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.417 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:17.417 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:17.417 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.417 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.417 [2024-11-29 12:55:48.816027] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:17.417 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.417 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:17.417 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.417 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.417 Malloc0 00:09:17.417 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.417 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:17.417 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.417 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:09:17.417 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.417 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:17.417 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.417 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.417 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.417 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:17.417 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.417 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.417 [2024-11-29 12:55:48.869495] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:17.417 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.417 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64501 00:09:17.417 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:17.417 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:17.417 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64501 /var/tmp/bdevperf.sock 00:09:17.417 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64501 ']' 00:09:17.417 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:17.418 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.418 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:17.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:17.418 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.418 12:55:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.677 [2024-11-29 12:55:48.937585] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
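With the target now listening on 10.0.0.3:4420, queue_depth.sh starts a bdevperf app in wait mode (-z, RPC socket /var/tmp/bdevperf.sock, -q 1024 -o 4096 -w verify -t 10). The rpc_cmd helper seen above is a thin wrapper around scripts/rpc.py; the target-side provisioning above plus the attach and perform_tests steps that follow look roughly like the direct rpc.py calls below, with every address, NQN and size taken from the trace (only the wrapper-to-rpc.py translation is assumed).

# Target-side provisioning (default RPC socket of the nvmf_tgt started earlier).
cd /home/vagrant/spdk_repo/spdk
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# Initiator side: bdevperf uses its own RPC socket; attach the remote namespace
# (it shows up as NVMe0n1) and kick off the 10 s verify run at queue depth 1024.
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests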
00:09:17.677 [2024-11-29 12:55:48.938019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64501 ] 00:09:17.677 [2024-11-29 12:55:49.091413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.677 [2024-11-29 12:55:49.152290] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.937 [2024-11-29 12:55:49.212886] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:17.937 12:55:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.937 12:55:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:17.937 12:55:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:17.937 12:55:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.937 12:55:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.937 NVMe0n1 00:09:17.937 12:55:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.937 12:55:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:18.196 Running I/O for 10 seconds... 00:09:20.072 6516.00 IOPS, 25.45 MiB/s [2024-11-29T12:55:52.525Z] 7385.00 IOPS, 28.85 MiB/s [2024-11-29T12:55:53.903Z] 7536.33 IOPS, 29.44 MiB/s [2024-11-29T12:55:54.842Z] 7691.25 IOPS, 30.04 MiB/s [2024-11-29T12:55:55.780Z] 7627.60 IOPS, 29.80 MiB/s [2024-11-29T12:55:56.717Z] 7650.17 IOPS, 29.88 MiB/s [2024-11-29T12:55:57.653Z] 7621.71 IOPS, 29.77 MiB/s [2024-11-29T12:55:58.601Z] 7673.38 IOPS, 29.97 MiB/s [2024-11-29T12:55:59.552Z] 7752.00 IOPS, 30.28 MiB/s [2024-11-29T12:55:59.811Z] 7800.20 IOPS, 30.47 MiB/s 00:09:28.296 Latency(us) 00:09:28.296 [2024-11-29T12:55:59.811Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:28.296 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:28.296 Verification LBA range: start 0x0 length 0x4000 00:09:28.296 NVMe0n1 : 10.10 7819.76 30.55 0.00 0.00 130299.26 25737.77 105810.85 00:09:28.296 [2024-11-29T12:55:59.811Z] =================================================================================================================== 00:09:28.296 [2024-11-29T12:55:59.811Z] Total : 7819.76 30.55 0.00 0.00 130299.26 25737.77 105810.85 00:09:28.296 { 00:09:28.296 "results": [ 00:09:28.296 { 00:09:28.296 "job": "NVMe0n1", 00:09:28.296 "core_mask": "0x1", 00:09:28.296 "workload": "verify", 00:09:28.296 "status": "finished", 00:09:28.296 "verify_range": { 00:09:28.296 "start": 0, 00:09:28.296 "length": 16384 00:09:28.296 }, 00:09:28.296 "queue_depth": 1024, 00:09:28.296 "io_size": 4096, 00:09:28.296 "runtime": 10.104145, 00:09:28.296 "iops": 7819.761098044416, 00:09:28.296 "mibps": 30.545941789236, 00:09:28.296 "io_failed": 0, 00:09:28.296 "io_timeout": 0, 00:09:28.296 "avg_latency_us": 130299.2609078483, 00:09:28.296 "min_latency_us": 25737.774545454544, 00:09:28.296 "max_latency_us": 105810.8509090909 00:09:28.296 
} 00:09:28.296 ], 00:09:28.296 "core_count": 1 00:09:28.296 } 00:09:28.296 12:55:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64501 00:09:28.296 12:55:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64501 ']' 00:09:28.296 12:55:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64501 00:09:28.296 12:55:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:28.296 12:55:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:28.296 12:55:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64501 00:09:28.296 killing process with pid 64501 00:09:28.296 Received shutdown signal, test time was about 10.000000 seconds 00:09:28.296 00:09:28.296 Latency(us) 00:09:28.296 [2024-11-29T12:55:59.811Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:28.296 [2024-11-29T12:55:59.811Z] =================================================================================================================== 00:09:28.296 [2024-11-29T12:55:59.811Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:28.296 12:55:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:28.296 12:55:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:28.296 12:55:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64501' 00:09:28.296 12:55:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64501 00:09:28.296 12:55:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64501 00:09:28.556 12:55:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:28.556 12:55:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:28.556 12:55:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:28.556 12:55:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:28.556 12:55:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:28.556 12:55:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:28.556 12:55:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:28.556 12:55:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:28.556 rmmod nvme_tcp 00:09:28.556 rmmod nvme_fabrics 00:09:28.556 rmmod nvme_keyring 00:09:28.556 12:55:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:28.556 12:55:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:28.556 12:55:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:28.556 12:55:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64469 ']' 00:09:28.556 12:55:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64469 00:09:28.556 12:55:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64469 ']' 00:09:28.556 
12:55:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64469 00:09:28.556 12:55:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:28.556 12:55:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:28.556 12:55:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64469 00:09:28.556 killing process with pid 64469 00:09:28.556 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:28.556 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:28.556 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64469' 00:09:28.556 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64469 00:09:28.556 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64469 00:09:28.816 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:28.816 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:28.816 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:28.816 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:28.816 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:28.816 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:28.816 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:28.816 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:28.816 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:28.816 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:28.816 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:28.816 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:28.816 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:29.075 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:29.075 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:29.076 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:29.076 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:29.076 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:29.076 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:29.076 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:29.076 12:56:00 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:29.076 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:29.076 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:29.076 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.076 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.076 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.076 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:09:29.076 00:09:29.076 real 0m13.467s 00:09:29.076 user 0m22.171s 00:09:29.076 sys 0m2.512s 00:09:29.076 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.076 ************************************ 00:09:29.076 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:29.076 END TEST nvmf_queue_depth 00:09:29.076 ************************************ 00:09:29.076 12:56:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:29.076 12:56:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:29.076 12:56:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.076 12:56:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:29.076 ************************************ 00:09:29.076 START TEST nvmf_target_multipath 00:09:29.076 ************************************ 00:09:29.076 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:29.336 * Looking for test storage... 
00:09:29.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:29.336 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:29.336 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:09:29.336 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:29.336 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:29.336 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:29.336 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:29.336 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:29.336 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:29.336 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:29.336 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:29.336 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:29.336 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:29.336 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:29.336 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:29.336 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:29.336 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:29.336 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:29.336 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:29.336 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:29.336 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:29.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.337 --rc genhtml_branch_coverage=1 00:09:29.337 --rc genhtml_function_coverage=1 00:09:29.337 --rc genhtml_legend=1 00:09:29.337 --rc geninfo_all_blocks=1 00:09:29.337 --rc geninfo_unexecuted_blocks=1 00:09:29.337 00:09:29.337 ' 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:29.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.337 --rc genhtml_branch_coverage=1 00:09:29.337 --rc genhtml_function_coverage=1 00:09:29.337 --rc genhtml_legend=1 00:09:29.337 --rc geninfo_all_blocks=1 00:09:29.337 --rc geninfo_unexecuted_blocks=1 00:09:29.337 00:09:29.337 ' 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:29.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.337 --rc genhtml_branch_coverage=1 00:09:29.337 --rc genhtml_function_coverage=1 00:09:29.337 --rc genhtml_legend=1 00:09:29.337 --rc geninfo_all_blocks=1 00:09:29.337 --rc geninfo_unexecuted_blocks=1 00:09:29.337 00:09:29.337 ' 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:29.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.337 --rc genhtml_branch_coverage=1 00:09:29.337 --rc genhtml_function_coverage=1 00:09:29.337 --rc genhtml_legend=1 00:09:29.337 --rc geninfo_all_blocks=1 00:09:29.337 --rc geninfo_unexecuted_blocks=1 00:09:29.337 00:09:29.337 ' 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.337 
12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:29.337 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:29.337 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:29.338 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:29.338 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:29.338 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:29.338 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:29.338 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:29.338 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:29.338 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:29.338 12:56:00 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:29.338 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:29.338 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:29.338 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:29.338 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:29.338 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:29.338 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:29.338 Cannot find device "nvmf_init_br" 00:09:29.338 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:29.338 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:29.338 Cannot find device "nvmf_init_br2" 00:09:29.338 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:29.338 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:29.338 Cannot find device "nvmf_tgt_br" 00:09:29.338 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:09:29.338 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:29.597 Cannot find device "nvmf_tgt_br2" 00:09:29.597 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:09:29.597 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:29.597 Cannot find device "nvmf_init_br" 00:09:29.597 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:09:29.597 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:29.597 Cannot find device "nvmf_init_br2" 00:09:29.597 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:09:29.597 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:29.597 Cannot find device "nvmf_tgt_br" 00:09:29.597 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:09:29.597 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:29.597 Cannot find device "nvmf_tgt_br2" 00:09:29.597 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:09:29.597 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:29.597 Cannot find device "nvmf_br" 00:09:29.597 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:09:29.597 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:29.597 Cannot find device "nvmf_init_if" 00:09:29.597 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:09:29.597 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:29.597 Cannot find device "nvmf_init_if2" 00:09:29.597 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:09:29.597 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:29.597 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:29.597 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:09:29.597 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:29.597 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:29.597 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:09:29.597 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:29.597 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:29.597 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:29.597 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:29.597 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:29.597 12:56:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:29.597 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:29.597 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:29.597 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:29.597 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:29.597 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:29.597 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:29.597 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:29.597 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:29.597 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:29.597 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:29.597 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:29.597 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
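Editor's note: the veth plumbing traced above amounts to the condensed sketch below. Interface names and addresses are taken verbatim from the trace; this is a simplified recap, not the actual nvmf_veth_init implementation, which also sets up the second pair (nvmf_init_if2 / nvmf_tgt_if2) and enslaves all *_br peers to the bridge.

    # condensed sketch of the topology under construction (assumes root privileges)
    ip netns add nvmf_tgt_ns_spdk                                  # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator leg, 10.0.0.1/24
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target leg, 10.0.0.3/24
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up      # *_br peers are attached to this bridge next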
00:09:29.598 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:29.598 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:29.598 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:29.598 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:29.598 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:29.857 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:29.857 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:09:29.857 00:09:29.857 --- 10.0.0.3 ping statistics --- 00:09:29.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.857 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:29.857 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:29.857 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:09:29.857 00:09:29.857 --- 10.0.0.4 ping statistics --- 00:09:29.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.857 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:29.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:29.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:09:29.857 00:09:29.857 --- 10.0.0.1 ping statistics --- 00:09:29.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.857 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:29.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:29.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:09:29.857 00:09:29.857 --- 10.0.0.2 ping statistics --- 00:09:29.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.857 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=64871 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 64871 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 64871 ']' 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
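Editor's note: the pings above confirm the initiator-side (10.0.0.1/10.0.0.2) and namespace-side (10.0.0.3/10.0.0.4) addresses are reachable, after which the trace launches nvmf_tgt inside the namespace and starts waiting on its RPC socket. A minimal sketch of that launch-and-wait pattern follows; the command line and socket path are copied from the trace, while the retry loop is illustrative and not the real waitforlisten helper.

    # launch nvmf_tgt in the target namespace and wait for its RPC socket to appear
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!                            # the trace records this pid as 64871
    until [ -S /var/tmp/spdk.sock ]; do   # socket shows up once the app has initialized
        sleep 0.5
    done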
00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:29.857 12:56:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:29.857 [2024-11-29 12:56:01.279168] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:09:29.857 [2024-11-29 12:56:01.279303] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.116 [2024-11-29 12:56:01.436358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:30.116 [2024-11-29 12:56:01.520249] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:30.116 [2024-11-29 12:56:01.520341] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:30.116 [2024-11-29 12:56:01.520356] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:30.116 [2024-11-29 12:56:01.520367] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:30.116 [2024-11-29 12:56:01.520377] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:30.116 [2024-11-29 12:56:01.521991] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.116 [2024-11-29 12:56:01.522083] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.116 [2024-11-29 12:56:01.522149] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.116 [2024-11-29 12:56:01.522148] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:30.116 [2024-11-29 12:56:01.598161] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:31.052 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.052 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:09:31.052 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:31.052 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:31.052 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:31.052 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.052 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:31.311 [2024-11-29 12:56:02.735439] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:31.311 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b 
Malloc0 00:09:31.577 Malloc0 00:09:31.577 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:31.936 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:32.212 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:32.471 [2024-11-29 12:56:03.850228] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:32.471 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:09:32.729 [2024-11-29 12:56:04.094541] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:09:32.729 12:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:32.989 12:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:09:32.989 12:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:32.989 12:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:09:32.989 12:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:32.989 12:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:32.989 12:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:09:34.889 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:34.889 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:34.889 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:35.146 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:35.146 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:35.146 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:09:35.146 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:35.146 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 
00:09:35.146 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:35.146 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:35.146 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:35.147 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:35.147 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:35.147 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:35.147 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:35.147 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:35.147 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:35.147 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:35.147 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:35.147 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:35.147 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:35.147 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:35.147 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:35.147 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:35.147 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:35.147 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:35.147 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:35.147 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:35.147 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:35.147 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:35.147 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:35.147 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:35.147 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=64966 00:09:35.147 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:35.147 12:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:35.147 [global] 00:09:35.147 thread=1 00:09:35.147 invalidate=1 00:09:35.147 rw=randrw 00:09:35.147 time_based=1 00:09:35.147 runtime=6 00:09:35.147 ioengine=libaio 00:09:35.147 direct=1 00:09:35.147 bs=4096 00:09:35.147 iodepth=128 00:09:35.147 norandommap=0 00:09:35.147 numjobs=1 00:09:35.147 00:09:35.147 verify_dump=1 00:09:35.147 verify_backlog=512 00:09:35.147 verify_state_save=0 00:09:35.147 do_verify=1 00:09:35.147 verify=crc32c-intel 00:09:35.147 [job0] 00:09:35.147 filename=/dev/nvme0n1 00:09:35.147 Could not set queue depth (nvme0n1) 00:09:35.147 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:35.147 fio-3.35 00:09:35.147 Starting 1 thread 00:09:36.079 12:56:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:36.338 12:56:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:36.596 12:56:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:36.596 12:56:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:36.596 12:56:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:36.596 12:56:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:36.596 12:56:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:36.596 12:56:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:36.596 12:56:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:36.596 12:56:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:36.596 12:56:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:36.596 12:56:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:36.596 12:56:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:36.596 12:56:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:36.596 12:56:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:36.854 12:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:37.113 12:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:37.113 12:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:37.113 12:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:37.113 12:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:37.113 12:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:37.113 12:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:37.113 12:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:37.113 12:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:37.113 12:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:37.113 12:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:37.113 12:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:37.113 12:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:37.113 12:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 64966 00:09:41.305 00:09:41.305 job0: (groupid=0, jobs=1): err= 0: pid=64993: Fri Nov 29 12:56:12 2024 00:09:41.305 read: IOPS=9438, BW=36.9MiB/s (38.7MB/s)(221MiB/6007msec) 00:09:41.305 slat (usec): min=4, max=7181, avg=63.82, stdev=256.43 00:09:41.305 clat (usec): min=1896, max=20085, avg=9306.16, stdev=1671.35 00:09:41.305 lat (usec): min=1908, max=21052, avg=9369.98, stdev=1676.66 00:09:41.305 clat percentiles (usec): 00:09:41.305 | 1.00th=[ 4686], 5.00th=[ 6915], 10.00th=[ 7898], 20.00th=[ 8455], 00:09:41.305 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9372], 00:09:41.305 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10814], 95.00th=[13173], 00:09:41.305 | 99.00th=[14615], 99.50th=[15008], 99.90th=[16057], 99.95th=[16909], 00:09:41.305 | 99.99th=[19268] 00:09:41.305 bw ( KiB/s): min= 6592, max=24864, per=50.74%, avg=19156.00, stdev=5014.91, samples=12 00:09:41.305 iops : min= 1648, max= 6216, avg=4789.00, stdev=1253.73, samples=12 00:09:41.305 write: IOPS=5497, BW=21.5MiB/s (22.5MB/s)(113MiB/5257msec); 0 zone resets 00:09:41.305 slat (usec): min=15, max=2386, avg=70.91, stdev=180.79 00:09:41.305 clat (usec): min=1386, max=19643, avg=8114.48, stdev=1580.91 00:09:41.305 lat (usec): min=1421, max=19668, avg=8185.39, stdev=1585.68 00:09:41.305 clat percentiles (usec): 00:09:41.305 | 1.00th=[ 3425], 5.00th=[ 4686], 10.00th=[ 6128], 20.00th=[ 7504], 00:09:41.305 | 30.00th=[ 7832], 40.00th=[ 8094], 50.00th=[ 8291], 60.00th=[ 8455], 00:09:41.305 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9372], 95.00th=[ 9896], 00:09:41.305 | 99.00th=[12911], 99.50th=[13960], 99.90th=[15926], 99.95th=[16909], 00:09:41.305 | 99.99th=[17695] 00:09:41.305 bw ( KiB/s): min= 7048, max=24336, per=87.43%, avg=19226.67, stdev=4758.45, samples=12 00:09:41.305 iops : min= 1762, max= 6084, avg=4806.67, stdev=1189.61, samples=12 00:09:41.305 lat (msec) : 2=0.02%, 4=1.09%, 10=84.04%, 20=14.84%, 50=0.01% 00:09:41.305 cpu : usr=5.64%, sys=19.85%, ctx=4914, majf=0, minf=90 00:09:41.305 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:41.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.305 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:41.305 issued rwts: total=56697,28899,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.305 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:41.305 00:09:41.305 Run status group 0 (all jobs): 00:09:41.305 READ: bw=36.9MiB/s (38.7MB/s), 36.9MiB/s-36.9MiB/s (38.7MB/s-38.7MB/s), io=221MiB (232MB), run=6007-6007msec 00:09:41.305 WRITE: bw=21.5MiB/s (22.5MB/s), 21.5MiB/s-21.5MiB/s (22.5MB/s-22.5MB/s), io=113MiB (118MB), run=5257-5257msec 00:09:41.305 00:09:41.305 Disk stats (read/write): 00:09:41.305 nvme0n1: ios=55885/28359, merge=0/0, ticks=499031/216410, in_queue=715441, util=98.53% 00:09:41.305 12:56:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:41.874 12:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:09:41.874 12:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:41.874 12:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:41.874 12:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:41.874 12:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:41.874 12:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:41.874 12:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:41.874 12:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:41.874 12:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:41.874 12:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:41.874 12:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:41.874 12:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:41.874 12:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:41.874 12:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:41.874 12:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:41.874 12:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=65070 00:09:41.874 12:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:42.132 [global] 00:09:42.132 thread=1 00:09:42.132 invalidate=1 00:09:42.132 rw=randrw 00:09:42.132 time_based=1 00:09:42.132 runtime=6 00:09:42.132 ioengine=libaio 00:09:42.132 direct=1 00:09:42.132 bs=4096 00:09:42.132 iodepth=128 00:09:42.132 norandommap=0 00:09:42.132 numjobs=1 00:09:42.132 00:09:42.132 verify_dump=1 00:09:42.132 verify_backlog=512 00:09:42.132 verify_state_save=0 00:09:42.132 do_verify=1 00:09:42.132 verify=crc32c-intel 00:09:42.132 [job0] 00:09:42.132 filename=/dev/nvme0n1 00:09:42.132 Could not set queue depth (nvme0n1) 00:09:42.132 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:42.133 fio-3.35 00:09:42.133 Starting 1 thread 00:09:43.067 12:56:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:43.326 12:56:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:43.585 
12:56:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:43.585 12:56:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:43.585 12:56:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:43.585 12:56:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:43.585 12:56:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:43.585 12:56:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:43.585 12:56:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:43.585 12:56:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:43.585 12:56:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:43.585 12:56:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:43.585 12:56:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:43.585 12:56:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:43.585 12:56:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:43.843 12:56:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:44.110 12:56:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:44.110 12:56:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:44.110 12:56:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:44.110 12:56:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:44.110 12:56:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:44.110 12:56:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:44.110 12:56:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:44.110 12:56:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:44.110 12:56:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:44.110 12:56:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:44.110 12:56:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:44.110 12:56:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:44.110 12:56:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 65070 00:09:48.360 00:09:48.360 job0: (groupid=0, jobs=1): err= 0: pid=65091: Fri Nov 29 12:56:19 2024 00:09:48.360 read: IOPS=9562, BW=37.4MiB/s (39.2MB/s)(224MiB/6007msec) 00:09:48.360 slat (usec): min=2, max=8184, avg=51.53, stdev=232.31 00:09:48.360 clat (usec): min=321, max=21130, avg=9194.57, stdev=2370.72 00:09:48.360 lat (usec): min=341, max=21181, avg=9246.10, stdev=2376.75 00:09:48.360 clat percentiles (usec): 00:09:48.360 | 1.00th=[ 2507], 5.00th=[ 5080], 10.00th=[ 6587], 20.00th=[ 7963], 00:09:48.360 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9503], 00:09:48.360 | 70.00th=[ 9896], 80.00th=[10421], 90.00th=[11731], 95.00th=[13566], 00:09:48.360 | 99.00th=[16450], 99.50th=[17695], 99.90th=[19792], 99.95th=[19792], 00:09:48.360 | 99.99th=[20841] 00:09:48.360 bw ( KiB/s): min= 9152, max=26360, per=51.47%, avg=19689.45, stdev=5920.42, samples=11 00:09:48.360 iops : min= 2288, max= 6590, avg=4922.36, stdev=1480.11, samples=11 00:09:48.360 write: IOPS=5623, BW=22.0MiB/s (23.0MB/s)(116MiB/5270msec); 0 zone resets 00:09:48.360 slat (usec): min=4, max=3034, avg=62.39, stdev=163.07 00:09:48.360 clat (usec): min=952, max=19154, avg=7902.94, stdev=1929.82 00:09:48.360 lat (usec): min=982, max=19179, avg=7965.33, stdev=1940.67 00:09:48.360 clat percentiles (usec): 00:09:48.360 | 1.00th=[ 2966], 5.00th=[ 4113], 10.00th=[ 4948], 20.00th=[ 6521], 00:09:48.360 | 30.00th=[ 7439], 40.00th=[ 7898], 50.00th=[ 8225], 60.00th=[ 8586], 00:09:48.360 | 70.00th=[ 8848], 80.00th=[ 9241], 90.00th=[ 9765], 95.00th=[10290], 00:09:48.360 | 99.00th=[12780], 99.50th=[13829], 99.90th=[17171], 99.95th=[17433], 00:09:48.360 | 99.99th=[18220] 00:09:48.360 bw ( KiB/s): min= 9304, max=25648, per=87.70%, avg=19726.55, stdev=5691.49, samples=11 00:09:48.360 iops : min= 2326, max= 6412, avg=4931.64, stdev=1422.87, samples=11 00:09:48.360 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.06% 00:09:48.360 lat (msec) : 2=0.38%, 4=2.75%, 10=76.53%, 20=20.22%, 50=0.03% 00:09:48.360 cpu : usr=5.54%, sys=20.28%, ctx=5027, majf=0, minf=114 00:09:48.360 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:48.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.360 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:48.360 issued rwts: total=57442,29634,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.360 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:09:48.360 00:09:48.360 Run status group 0 (all jobs): 00:09:48.360 READ: bw=37.4MiB/s (39.2MB/s), 37.4MiB/s-37.4MiB/s (39.2MB/s-39.2MB/s), io=224MiB (235MB), run=6007-6007msec 00:09:48.360 WRITE: bw=22.0MiB/s (23.0MB/s), 22.0MiB/s-22.0MiB/s (23.0MB/s-23.0MB/s), io=116MiB (121MB), run=5270-5270msec 00:09:48.360 00:09:48.360 Disk stats (read/write): 00:09:48.360 nvme0n1: ios=56662/29165, merge=0/0, ticks=498929/216688, in_queue=715617, util=98.75% 00:09:48.360 12:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:48.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:48.360 12:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:48.360 12:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:09:48.361 12:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:48.361 12:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:48.361 12:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:48.361 12:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:48.361 12:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:09:48.361 12:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:48.619 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:48.619 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:48.619 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:48.619 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:48.619 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:48.619 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:48.878 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:48.878 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:48.878 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:48.878 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:48.878 rmmod nvme_tcp 00:09:48.878 rmmod nvme_fabrics 00:09:48.878 rmmod nvme_keyring 00:09:48.878 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:48.878 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:48.878 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:48.878 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # 
'[' -n 64871 ']' 00:09:48.878 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 64871 00:09:48.878 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 64871 ']' 00:09:48.878 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 64871 00:09:48.878 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:09:48.878 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:48.878 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64871 00:09:48.878 killing process with pid 64871 00:09:48.878 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:48.878 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:48.878 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64871' 00:09:48.878 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 64871 00:09:48.878 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 64871 00:09:49.137 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:49.137 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:49.137 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:49.137 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:49.137 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:49.137 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:49.137 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:49.137 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:49.137 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:49.137 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:49.137 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:49.137 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:49.137 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:49.137 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:49.137 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:49.137 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:49.137 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:49.137 
12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:49.137 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:49.137 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:49.137 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:49.395 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:49.396 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:49.396 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.396 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:49.396 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.396 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:09:49.396 00:09:49.396 real 0m20.155s 00:09:49.396 user 1m16.171s 00:09:49.396 sys 0m8.429s 00:09:49.396 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.396 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:49.396 ************************************ 00:09:49.396 END TEST nvmf_target_multipath 00:09:49.396 ************************************ 00:09:49.396 12:56:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:49.396 12:56:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:49.396 12:56:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.396 12:56:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:49.396 ************************************ 00:09:49.396 START TEST nvmf_zcopy 00:09:49.396 ************************************ 00:09:49.396 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:49.396 * Looking for test storage... 
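Note: the multipath test above alternates the ANA state of each listener over rpc.py (optimized, non_optimized, inaccessible) and then waits for the kernel's view of the matching controller path to catch up before fio continues. A condensed sketch of the check_ana_state helper whose expansion is traced above; the sysfs path and the 20-iteration budget come straight from the trace, while the sleep-and-retry loop is an assumption, since every captured call matched on the first check:

    check_ana_state() {
        local path=$1 ana_state=$2
        local timeout=20 i
        local ana_state_f=/sys/block/$path/ana_state
        for (( i = 0; i < timeout; i++ )); do
            # succeed as soon as the node exists and reports the expected ANA state
            [[ -e $ana_state_f && $(cat "$ana_state_f") == "$ana_state" ]] && return 0
            sleep 1
        done
        return 1
    }

    check_ana_state nvme0c0n1 optimized      # usage as seen in the trace
    check_ana_state nvme0c1n1 inaccessible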
00:09:49.396 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:49.396 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:49.396 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:49.396 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:09:49.655 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:49.655 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:49.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.656 --rc genhtml_branch_coverage=1 00:09:49.656 --rc genhtml_function_coverage=1 00:09:49.656 --rc genhtml_legend=1 00:09:49.656 --rc geninfo_all_blocks=1 00:09:49.656 --rc geninfo_unexecuted_blocks=1 00:09:49.656 00:09:49.656 ' 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:49.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.656 --rc genhtml_branch_coverage=1 00:09:49.656 --rc genhtml_function_coverage=1 00:09:49.656 --rc genhtml_legend=1 00:09:49.656 --rc geninfo_all_blocks=1 00:09:49.656 --rc geninfo_unexecuted_blocks=1 00:09:49.656 00:09:49.656 ' 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:49.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.656 --rc genhtml_branch_coverage=1 00:09:49.656 --rc genhtml_function_coverage=1 00:09:49.656 --rc genhtml_legend=1 00:09:49.656 --rc geninfo_all_blocks=1 00:09:49.656 --rc geninfo_unexecuted_blocks=1 00:09:49.656 00:09:49.656 ' 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:49.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.656 --rc genhtml_branch_coverage=1 00:09:49.656 --rc genhtml_function_coverage=1 00:09:49.656 --rc genhtml_legend=1 00:09:49.656 --rc geninfo_all_blocks=1 00:09:49.656 --rc geninfo_unexecuted_blocks=1 00:09:49.656 00:09:49.656 ' 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
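Note: the lcov probe above runs through the cmp_versions machinery in scripts/common.sh, which splits each version string into numeric fields and compares them pairwise. A condensed equivalent, assuming purely numeric dot-separated components (the real helper also splits on '-' and ':' and validates every field against ^[0-9]+$):

    version_lt() {   # succeeds when $1 < $2
        local IFS=.
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo old    # matches the 'lt 1.15 2' call traced above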
00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:49.656 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:49.656 12:56:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:49.656 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:49.656 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:49.656 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
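Note: the "[: : integer expression expected" message above is bash rejecting an empty string in a numeric test inside build_nvmf_app_args (nvmf/common.sh line 33); the test simply evaluates false and setup carries on. A minimal reproduction with a placeholder variable, since the capture does not show which flag is empty there, plus the usual defensive form:

    flag=""
    [ "$flag" -eq 1 ] && echo enabled        # stderr: '[: : integer expression expected'; test fails, script continues
    [ "${flag:-0}" -eq 1 ] && echo enabled   # defaulting the empty value keeps the test quiet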
00:09:49.656 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:49.656 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:49.656 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:49.656 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.656 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:49.656 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.656 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:49.657 Cannot find device "nvmf_init_br" 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:49.657 12:56:21 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:49.657 Cannot find device "nvmf_init_br2" 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:49.657 Cannot find device "nvmf_tgt_br" 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:49.657 Cannot find device "nvmf_tgt_br2" 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:49.657 Cannot find device "nvmf_init_br" 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:49.657 Cannot find device "nvmf_init_br2" 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:49.657 Cannot find device "nvmf_tgt_br" 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:49.657 Cannot find device "nvmf_tgt_br2" 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:49.657 Cannot find device "nvmf_br" 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:49.657 Cannot find device "nvmf_init_if" 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:49.657 Cannot find device "nvmf_init_if2" 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:49.657 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:49.657 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:49.657 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:49.916 12:56:21 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:49.916 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:49.916 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:09:49.916 00:09:49.916 --- 10.0.0.3 ping statistics --- 00:09:49.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.916 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:49.916 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:49.916 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:09:49.916 00:09:49.916 --- 10.0.0.4 ping statistics --- 00:09:49.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.916 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:49.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:49.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:09:49.916 00:09:49.916 --- 10.0.0.1 ping statistics --- 00:09:49.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.916 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:49.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:49.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:09:49.916 00:09:49.916 --- 10.0.0.2 ping statistics --- 00:09:49.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.916 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:49.916 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.175 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=65397 00:09:50.175 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:50.175 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 65397 00:09:50.175 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 65397 ']' 00:09:50.175 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.175 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:50.175 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.176 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:50.176 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.176 [2024-11-29 12:56:21.495194] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
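Note: before the target app starts, nvmf_veth_init builds the whole test topology from scratch: veth pairs for the initiator and target sides (the target ends moved into the nvmf_tgt_ns_spdk namespace), a bridge tying the host-side ends together, iptables ACCEPT rules for port 4420, and one ping per address to prove connectivity. A condensed sketch covering a single initiator/target pair, using only commands that appear in the trace; the real helper also creates the second pair (nvmf_init_if2 / nvmf_tgt_if2 with 10.0.0.2 / 10.0.0.4):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3                                    # host -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target namespace -> host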
00:09:50.176 [2024-11-29 12:56:21.495292] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.176 [2024-11-29 12:56:21.651379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.435 [2024-11-29 12:56:21.716331] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:50.435 [2024-11-29 12:56:21.716399] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:50.435 [2024-11-29 12:56:21.716413] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:50.435 [2024-11-29 12:56:21.716423] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:50.435 [2024-11-29 12:56:21.716432] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:50.435 [2024-11-29 12:56:21.716907] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.435 [2024-11-29 12:56:21.781141] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:50.435 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.435 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:50.435 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:50.435 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:50.435 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.435 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:50.435 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:50.435 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:50.435 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.435 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.435 [2024-11-29 12:56:21.915013] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:50.435 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.435 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:50.435 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.435 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.435 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.435 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:50.435 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.435 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:09:50.435 [2024-11-29 12:56:21.939140] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:50.435 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.435 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:50.435 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.435 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.694 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.694 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:50.694 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.694 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.694 malloc0 00:09:50.694 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.694 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:50.694 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.694 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.694 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.694 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:50.694 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:50.694 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:50.694 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:50.694 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:50.694 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:50.694 { 00:09:50.694 "params": { 00:09:50.694 "name": "Nvme$subsystem", 00:09:50.694 "trtype": "$TEST_TRANSPORT", 00:09:50.694 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:50.694 "adrfam": "ipv4", 00:09:50.694 "trsvcid": "$NVMF_PORT", 00:09:50.694 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:50.694 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:50.694 "hdgst": ${hdgst:-false}, 00:09:50.694 "ddgst": ${ddgst:-false} 00:09:50.694 }, 00:09:50.694 "method": "bdev_nvme_attach_controller" 00:09:50.694 } 00:09:50.694 EOF 00:09:50.694 )") 00:09:50.694 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:50.694 12:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
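Note: everything bdevperf needs is configured over the target's RPC socket first; the rpc_cmd calls traced above amount to the following sequence, with addresses and NQNs exactly as used in this run and rpc.py standing in for the test's rpc_cmd wrapper:

    rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy       # TCP transport with zero-copy enabled
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    rpc.py bdev_malloc_create 32 4096 -b malloc0              # 32 MB ram-backed bdev, 4096-byte blocks
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf itself is then pointed at /dev/fd/62, which carries the bdev_nvme_attach_controller blob assembled by gen_nvmf_target_json and printed a little further down.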
00:09:50.694 12:56:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:50.694 12:56:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:50.694 "params": { 00:09:50.694 "name": "Nvme1", 00:09:50.694 "trtype": "tcp", 00:09:50.694 "traddr": "10.0.0.3", 00:09:50.694 "adrfam": "ipv4", 00:09:50.694 "trsvcid": "4420", 00:09:50.694 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:50.694 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:50.694 "hdgst": false, 00:09:50.694 "ddgst": false 00:09:50.694 }, 00:09:50.694 "method": "bdev_nvme_attach_controller" 00:09:50.694 }' 00:09:50.694 [2024-11-29 12:56:22.051021] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:09:50.694 [2024-11-29 12:56:22.051119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65423 ] 00:09:50.694 [2024-11-29 12:56:22.204804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.953 [2024-11-29 12:56:22.274758] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.953 [2024-11-29 12:56:22.345350] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:51.211 Running I/O for 10 seconds... 00:09:53.083 5162.00 IOPS, 40.33 MiB/s [2024-11-29T12:56:25.544Z] 5527.50 IOPS, 43.18 MiB/s [2024-11-29T12:56:26.504Z] 5701.67 IOPS, 44.54 MiB/s [2024-11-29T12:56:27.883Z] 5815.00 IOPS, 45.43 MiB/s [2024-11-29T12:56:28.819Z] 5697.40 IOPS, 44.51 MiB/s [2024-11-29T12:56:29.757Z] 5693.50 IOPS, 44.48 MiB/s [2024-11-29T12:56:30.696Z] 5761.86 IOPS, 45.01 MiB/s [2024-11-29T12:56:31.634Z] 5809.50 IOPS, 45.39 MiB/s [2024-11-29T12:56:32.571Z] 5818.78 IOPS, 45.46 MiB/s [2024-11-29T12:56:32.571Z] 5818.40 IOPS, 45.46 MiB/s 00:10:01.056 Latency(us) 00:10:01.056 [2024-11-29T12:56:32.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:01.056 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:01.056 Verification LBA range: start 0x0 length 0x1000 00:10:01.056 Nvme1n1 : 10.02 5819.43 45.46 0.00 0.00 21923.09 2770.39 33602.09 00:10:01.056 [2024-11-29T12:56:32.572Z] =================================================================================================================== 00:10:01.057 [2024-11-29T12:56:32.572Z] Total : 5819.43 45.46 0.00 0.00 21923.09 2770.39 33602.09 00:10:01.316 12:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65540 00:10:01.316 12:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:01.316 12:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:01.316 12:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:01.316 12:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:01.316 12:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.316 12:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:01.316 12:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:01.316 12:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:01.316 { 00:10:01.316 "params": { 00:10:01.316 "name": "Nvme$subsystem", 00:10:01.316 "trtype": "$TEST_TRANSPORT", 00:10:01.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:01.316 "adrfam": "ipv4", 00:10:01.316 "trsvcid": "$NVMF_PORT", 00:10:01.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:01.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:01.316 "hdgst": ${hdgst:-false}, 00:10:01.316 "ddgst": ${ddgst:-false} 00:10:01.316 }, 00:10:01.316 "method": "bdev_nvme_attach_controller" 00:10:01.316 } 00:10:01.316 EOF 00:10:01.316 )") 00:10:01.316 12:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:01.316 12:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:10:01.316 [2024-11-29 12:56:32.704965] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.316 [2024-11-29 12:56:32.705034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.316 12:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:01.316 12:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:01.316 "params": { 00:10:01.316 "name": "Nvme1", 00:10:01.316 "trtype": "tcp", 00:10:01.316 "traddr": "10.0.0.3", 00:10:01.316 "adrfam": "ipv4", 00:10:01.316 "trsvcid": "4420", 00:10:01.316 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:01.316 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:01.316 "hdgst": false, 00:10:01.316 "ddgst": false 00:10:01.316 }, 00:10:01.316 "method": "bdev_nvme_attach_controller" 00:10:01.316 }' 00:10:01.316 [2024-11-29 12:56:32.716874] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.316 [2024-11-29 12:56:32.716915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.316 [2024-11-29 12:56:32.728873] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.316 [2024-11-29 12:56:32.728909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.316 [2024-11-29 12:56:32.740871] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.316 [2024-11-29 12:56:32.740904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.316 [2024-11-29 12:56:32.752881] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.316 [2024-11-29 12:56:32.752916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.316 [2024-11-29 12:56:32.759335] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:10:01.316 [2024-11-29 12:56:32.759514] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65540 ] 00:10:01.316 [2024-11-29 12:56:32.764874] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.316 [2024-11-29 12:56:32.764923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.316 [2024-11-29 12:56:32.776875] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.316 [2024-11-29 12:56:32.776908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.316 [2024-11-29 12:56:32.788879] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.316 [2024-11-29 12:56:32.788909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.316 [2024-11-29 12:56:32.800880] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.316 [2024-11-29 12:56:32.800911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.316 [2024-11-29 12:56:32.812906] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.316 [2024-11-29 12:56:32.812943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.316 [2024-11-29 12:56:32.824893] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.316 [2024-11-29 12:56:32.824940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.576 [2024-11-29 12:56:32.836925] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.576 [2024-11-29 12:56:32.836965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.576 [2024-11-29 12:56:32.848912] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.576 [2024-11-29 12:56:32.848937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.576 [2024-11-29 12:56:32.860919] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.576 [2024-11-29 12:56:32.860948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.576 [2024-11-29 12:56:32.872911] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.576 [2024-11-29 12:56:32.872934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.576 [2024-11-29 12:56:32.884916] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.576 [2024-11-29 12:56:32.884939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.576 [2024-11-29 12:56:32.896921] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.576 [2024-11-29 12:56:32.896945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.576 [2024-11-29 12:56:32.908948] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.576 [2024-11-29 12:56:32.908979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.576 [2024-11-29 12:56:32.912561] app.c: 919:spdk_app_start: *NOTICE*: 
Total cores available: 1 00:10:01.576 [2024-11-29 12:56:32.916921] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.576 [2024-11-29 12:56:32.916944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.576 [2024-11-29 12:56:32.928935] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.576 [2024-11-29 12:56:32.928961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.576 [2024-11-29 12:56:32.936930] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.576 [2024-11-29 12:56:32.936952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.576 [2024-11-29 12:56:32.944921] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.576 [2024-11-29 12:56:32.944953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.576 [2024-11-29 12:56:32.956945] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.576 [2024-11-29 12:56:32.956975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.576 [2024-11-29 12:56:32.964936] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.576 [2024-11-29 12:56:32.964961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.576 [2024-11-29 12:56:32.972934] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.576 [2024-11-29 12:56:32.972957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.576 [2024-11-29 12:56:32.976018] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.576 [2024-11-29 12:56:32.980942] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.576 [2024-11-29 12:56:32.980965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.576 [2024-11-29 12:56:32.988939] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.577 [2024-11-29 12:56:32.988964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.577 [2024-11-29 12:56:33.000956] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.577 [2024-11-29 12:56:33.000987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.577 [2024-11-29 12:56:33.012949] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.577 [2024-11-29 12:56:33.012977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.577 [2024-11-29 12:56:33.024951] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.577 [2024-11-29 12:56:33.024975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.577 [2024-11-29 12:56:33.036953] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.577 [2024-11-29 12:56:33.036979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.577 [2024-11-29 12:56:33.038043] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:01.577 [2024-11-29 12:56:33.044950] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:01.577 [2024-11-29 12:56:33.044974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.577 [2024-11-29 12:56:33.052954] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.577 [2024-11-29 12:56:33.052977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.577 [2024-11-29 12:56:33.064959] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.577 [2024-11-29 12:56:33.064983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.577 [2024-11-29 12:56:33.072960] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.577 [2024-11-29 12:56:33.072983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.577 [2024-11-29 12:56:33.084985] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.577 [2024-11-29 12:56:33.085009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.836 [2024-11-29 12:56:33.093025] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.836 [2024-11-29 12:56:33.093058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.836 [2024-11-29 12:56:33.101041] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.836 [2024-11-29 12:56:33.101069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.836 [2024-11-29 12:56:33.109046] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.836 [2024-11-29 12:56:33.109074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.836 [2024-11-29 12:56:33.117060] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.836 [2024-11-29 12:56:33.117090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.836 [2024-11-29 12:56:33.125069] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.836 [2024-11-29 12:56:33.125099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.836 [2024-11-29 12:56:33.133070] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.836 [2024-11-29 12:56:33.133114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.836 [2024-11-29 12:56:33.141085] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.836 [2024-11-29 12:56:33.141125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.836 [2024-11-29 12:56:33.149080] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.836 [2024-11-29 12:56:33.149128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.836 [2024-11-29 12:56:33.157076] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.836 [2024-11-29 12:56:33.157101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.836 Running I/O for 5 seconds... 
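The "Running I/O for 5 seconds..." marker and the periodic "IOPS, MiB/s" counters that follow come from the bdevperf process whose DPDK EAL parameters are logged above (core mask 0x1, file prefix spdk_pid65540), driven by the JSON config assembled earlier. A hedged sketch of how such a run is typically launched is shown below; the binary path and the queue-depth, I/O-size and workload flags are illustrative assumptions, not values taken from this log.

# Hypothetical bdevperf launch mirroring the run above: single core (0x1),
# the generated JSON config, and a 5-second run. The -q/-o/-w/-M values and
# the config path are assumptions for illustration only.
"$SPDK_ROOT"/build/examples/bdevperf -m 0x1 \
    --json /tmp/zcopy_bdevperf.json \
    -q 64 -o 4096 -w randrw -M 50 -t 5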
00:10:01.836 [2024-11-29 12:56:33.165086] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.836 [2024-11-29 12:56:33.165109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.836 [2024-11-29 12:56:33.178927] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.836 [2024-11-29 12:56:33.178957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.836 [2024-11-29 12:56:33.193870] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.836 [2024-11-29 12:56:33.193912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.836 [2024-11-29 12:56:33.209117] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.836 [2024-11-29 12:56:33.209164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.836 [2024-11-29 12:56:33.226207] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.836 [2024-11-29 12:56:33.226246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.836 [2024-11-29 12:56:33.242877] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.836 [2024-11-29 12:56:33.242946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.836 [2024-11-29 12:56:33.259552] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.836 [2024-11-29 12:56:33.259584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.836 [2024-11-29 12:56:33.276738] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.836 [2024-11-29 12:56:33.276769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.836 [2024-11-29 12:56:33.292461] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.836 [2024-11-29 12:56:33.292492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.836 [2024-11-29 12:56:33.303735] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.836 [2024-11-29 12:56:33.303781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.836 [2024-11-29 12:56:33.320786] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.836 [2024-11-29 12:56:33.320826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.836 [2024-11-29 12:56:33.335533] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.836 [2024-11-29 12:56:33.335579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.836 [2024-11-29 12:56:33.345257] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.836 [2024-11-29 12:56:33.345294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.096 [2024-11-29 12:56:33.357217] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.096 [2024-11-29 12:56:33.357252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.096 [2024-11-29 12:56:33.368152] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.096 
[2024-11-29 12:56:33.368187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.096 [2024-11-29 12:56:33.384753] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.096 [2024-11-29 12:56:33.384785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.096 [2024-11-29 12:56:33.404111] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.096 [2024-11-29 12:56:33.404142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.096 [2024-11-29 12:56:33.419246] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.096 [2024-11-29 12:56:33.419293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.096 [2024-11-29 12:56:33.436039] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.096 [2024-11-29 12:56:33.436088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.096 [2024-11-29 12:56:33.453261] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.096 [2024-11-29 12:56:33.453293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.096 [2024-11-29 12:56:33.463680] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.096 [2024-11-29 12:56:33.463712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.096 [2024-11-29 12:56:33.475977] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.096 [2024-11-29 12:56:33.476008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.096 [2024-11-29 12:56:33.490664] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.096 [2024-11-29 12:56:33.490696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.096 [2024-11-29 12:56:33.506308] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.096 [2024-11-29 12:56:33.506339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.096 [2024-11-29 12:56:33.515738] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.096 [2024-11-29 12:56:33.515769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.096 [2024-11-29 12:56:33.531414] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.096 [2024-11-29 12:56:33.531477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.096 [2024-11-29 12:56:33.541167] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.096 [2024-11-29 12:56:33.541201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.096 [2024-11-29 12:56:33.556734] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.096 [2024-11-29 12:56:33.556768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.096 [2024-11-29 12:56:33.573436] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.096 [2024-11-29 12:56:33.573474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.096 [2024-11-29 12:56:33.583343] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.096 [2024-11-29 12:56:33.583373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.096 [2024-11-29 12:56:33.593906] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.096 [2024-11-29 12:56:33.593934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.096 [2024-11-29 12:56:33.606372] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.096 [2024-11-29 12:56:33.606402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.356 [2024-11-29 12:56:33.618171] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.356 [2024-11-29 12:56:33.618200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.356 [2024-11-29 12:56:33.634382] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.356 [2024-11-29 12:56:33.634427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.356 [2024-11-29 12:56:33.651324] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.356 [2024-11-29 12:56:33.651360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.356 [2024-11-29 12:56:33.661797] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.356 [2024-11-29 12:56:33.661832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.356 [2024-11-29 12:56:33.673320] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.356 [2024-11-29 12:56:33.673352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.356 [2024-11-29 12:56:33.684870] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.356 [2024-11-29 12:56:33.684916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.356 [2024-11-29 12:56:33.700792] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.356 [2024-11-29 12:56:33.700831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.356 [2024-11-29 12:56:33.716201] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.356 [2024-11-29 12:56:33.716246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.356 [2024-11-29 12:56:33.725314] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.357 [2024-11-29 12:56:33.725351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.357 [2024-11-29 12:56:33.738847] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.357 [2024-11-29 12:56:33.738896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.357 [2024-11-29 12:56:33.749971] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.357 [2024-11-29 12:56:33.750001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.357 [2024-11-29 12:56:33.762948] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.357 [2024-11-29 12:56:33.762976] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.357 [2024-11-29 12:56:33.779645] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.357 [2024-11-29 12:56:33.779677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.357 [2024-11-29 12:56:33.795931] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.357 [2024-11-29 12:56:33.795976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.357 [2024-11-29 12:56:33.812204] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.357 [2024-11-29 12:56:33.812234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.357 [2024-11-29 12:56:33.821723] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.357 [2024-11-29 12:56:33.821753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.357 [2024-11-29 12:56:33.834813] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.357 [2024-11-29 12:56:33.834843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.357 [2024-11-29 12:56:33.850677] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.357 [2024-11-29 12:56:33.850709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.616 [2024-11-29 12:56:33.869734] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.616 [2024-11-29 12:56:33.869764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.616 [2024-11-29 12:56:33.880351] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.616 [2024-11-29 12:56:33.880380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.616 [2024-11-29 12:56:33.890989] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.616 [2024-11-29 12:56:33.891017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.616 [2024-11-29 12:56:33.903233] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.616 [2024-11-29 12:56:33.903263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.616 [2024-11-29 12:56:33.922602] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.616 [2024-11-29 12:56:33.922632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.616 [2024-11-29 12:56:33.936689] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.616 [2024-11-29 12:56:33.936719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.616 [2024-11-29 12:56:33.945849] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.616 [2024-11-29 12:56:33.945890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.616 [2024-11-29 12:56:33.957258] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.616 [2024-11-29 12:56:33.957287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.616 [2024-11-29 12:56:33.970540] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.616 [2024-11-29 12:56:33.970571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.616 [2024-11-29 12:56:33.987526] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.616 [2024-11-29 12:56:33.987559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.616 [2024-11-29 12:56:34.005262] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.616 [2024-11-29 12:56:34.005293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.616 [2024-11-29 12:56:34.015061] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.616 [2024-11-29 12:56:34.015089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.616 [2024-11-29 12:56:34.028869] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.616 [2024-11-29 12:56:34.028910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.616 [2024-11-29 12:56:34.038564] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.616 [2024-11-29 12:56:34.038594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.617 [2024-11-29 12:56:34.053042] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.617 [2024-11-29 12:56:34.053090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.617 [2024-11-29 12:56:34.071327] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.617 [2024-11-29 12:56:34.071382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.617 [2024-11-29 12:56:34.081226] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.617 [2024-11-29 12:56:34.081264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.617 [2024-11-29 12:56:34.091385] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.617 [2024-11-29 12:56:34.091420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.617 [2024-11-29 12:56:34.101958] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.617 [2024-11-29 12:56:34.101993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.617 [2024-11-29 12:56:34.117096] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.617 [2024-11-29 12:56:34.117136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.617 [2024-11-29 12:56:34.126636] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.617 [2024-11-29 12:56:34.126669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.876 [2024-11-29 12:56:34.142846] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.876 [2024-11-29 12:56:34.142906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.876 [2024-11-29 12:56:34.152578] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.876 [2024-11-29 12:56:34.152606] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.876 [2024-11-29 12:56:34.163404] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.876 [2024-11-29 12:56:34.163445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.876 11881.00 IOPS, 92.82 MiB/s [2024-11-29T12:56:34.391Z] [2024-11-29 12:56:34.175357] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.876 [2024-11-29 12:56:34.175387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.876 [2024-11-29 12:56:34.190738] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.876 [2024-11-29 12:56:34.190768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.876 [2024-11-29 12:56:34.209075] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.876 [2024-11-29 12:56:34.209105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.876 [2024-11-29 12:56:34.219321] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.876 [2024-11-29 12:56:34.219350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.876 [2024-11-29 12:56:34.229393] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.876 [2024-11-29 12:56:34.229421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.876 [2024-11-29 12:56:34.239566] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.876 [2024-11-29 12:56:34.239595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.876 [2024-11-29 12:56:34.254413] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.876 [2024-11-29 12:56:34.254444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.876 [2024-11-29 12:56:34.270533] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.876 [2024-11-29 12:56:34.270570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.876 [2024-11-29 12:56:34.281363] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.876 [2024-11-29 12:56:34.281411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.876 [2024-11-29 12:56:34.293171] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.876 [2024-11-29 12:56:34.293202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.876 [2024-11-29 12:56:34.307416] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.876 [2024-11-29 12:56:34.307471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.876 [2024-11-29 12:56:34.322536] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.876 [2024-11-29 12:56:34.322566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.876 [2024-11-29 12:56:34.341285] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.876 [2024-11-29 12:56:34.341315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.876 [2024-11-29 
12:56:34.356037] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.876 [2024-11-29 12:56:34.356066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.876 [2024-11-29 12:56:34.365178] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.876 [2024-11-29 12:56:34.365207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.876 [2024-11-29 12:56:34.380819] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.876 [2024-11-29 12:56:34.380853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.136 [2024-11-29 12:56:34.391903] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.136 [2024-11-29 12:56:34.391944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.136 [2024-11-29 12:56:34.400336] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.136 [2024-11-29 12:56:34.400365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.136 [2024-11-29 12:56:34.411409] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.136 [2024-11-29 12:56:34.411446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.136 [2024-11-29 12:56:34.424216] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.136 [2024-11-29 12:56:34.424246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.136 [2024-11-29 12:56:34.442177] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.136 [2024-11-29 12:56:34.442209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.136 [2024-11-29 12:56:34.456856] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.136 [2024-11-29 12:56:34.456898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.136 [2024-11-29 12:56:34.466190] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.136 [2024-11-29 12:56:34.466219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.136 [2024-11-29 12:56:34.477893] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.136 [2024-11-29 12:56:34.477933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.136 [2024-11-29 12:56:34.488418] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.136 [2024-11-29 12:56:34.488447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.136 [2024-11-29 12:56:34.498733] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.136 [2024-11-29 12:56:34.498761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.136 [2024-11-29 12:56:34.511123] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.136 [2024-11-29 12:56:34.511152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.136 [2024-11-29 12:56:34.520041] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.136 [2024-11-29 12:56:34.520070] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.136 [2024-11-29 12:56:34.536587] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.136 [2024-11-29 12:56:34.536617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.136 [2024-11-29 12:56:34.554161] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.136 [2024-11-29 12:56:34.554194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.136 [2024-11-29 12:56:34.565537] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.136 [2024-11-29 12:56:34.565582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.136 [2024-11-29 12:56:34.582312] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.136 [2024-11-29 12:56:34.582345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.136 [2024-11-29 12:56:34.598220] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.136 [2024-11-29 12:56:34.598249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.136 [2024-11-29 12:56:34.614995] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.136 [2024-11-29 12:56:34.615024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.136 [2024-11-29 12:56:34.631678] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.136 [2024-11-29 12:56:34.631708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.395 [2024-11-29 12:56:34.649497] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.395 [2024-11-29 12:56:34.649542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.395 [2024-11-29 12:56:34.659346] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.395 [2024-11-29 12:56:34.659374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.395 [2024-11-29 12:56:34.669066] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.395 [2024-11-29 12:56:34.669109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.395 [2024-11-29 12:56:34.679109] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.395 [2024-11-29 12:56:34.679138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.395 [2024-11-29 12:56:34.690183] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.396 [2024-11-29 12:56:34.690243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.396 [2024-11-29 12:56:34.703376] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.396 [2024-11-29 12:56:34.703405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.396 [2024-11-29 12:56:34.719398] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.396 [2024-11-29 12:56:34.719428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.396 [2024-11-29 12:56:34.737634] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.396 [2024-11-29 12:56:34.737664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.396 [2024-11-29 12:56:34.748466] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.396 [2024-11-29 12:56:34.748495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.396 [2024-11-29 12:56:34.766159] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.396 [2024-11-29 12:56:34.766187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.396 [2024-11-29 12:56:34.781475] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.396 [2024-11-29 12:56:34.781511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.396 [2024-11-29 12:56:34.792000] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.396 [2024-11-29 12:56:34.792029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.396 [2024-11-29 12:56:34.803611] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.396 [2024-11-29 12:56:34.803641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.396 [2024-11-29 12:56:34.818608] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.396 [2024-11-29 12:56:34.818637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.396 [2024-11-29 12:56:34.834414] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.396 [2024-11-29 12:56:34.834445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.396 [2024-11-29 12:56:34.851946] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.396 [2024-11-29 12:56:34.852011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.396 [2024-11-29 12:56:34.867603] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.396 [2024-11-29 12:56:34.867636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.396 [2024-11-29 12:56:34.876820] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.396 [2024-11-29 12:56:34.876850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.396 [2024-11-29 12:56:34.892066] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.396 [2024-11-29 12:56:34.892095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.396 [2024-11-29 12:56:34.907662] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.396 [2024-11-29 12:56:34.907693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.655 [2024-11-29 12:56:34.925299] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.655 [2024-11-29 12:56:34.925328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.655 [2024-11-29 12:56:34.942302] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.655 [2024-11-29 12:56:34.942332] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.655 [2024-11-29 12:56:34.959185] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.655 [2024-11-29 12:56:34.959214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.655 [2024-11-29 12:56:34.975410] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.655 [2024-11-29 12:56:34.975462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.655 [2024-11-29 12:56:34.992145] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.655 [2024-11-29 12:56:34.992175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.655 [2024-11-29 12:56:35.009470] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.655 [2024-11-29 12:56:35.009498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.655 [2024-11-29 12:56:35.024890] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.655 [2024-11-29 12:56:35.024918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.655 [2024-11-29 12:56:35.040649] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.655 [2024-11-29 12:56:35.040678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.655 [2024-11-29 12:56:35.050268] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.655 [2024-11-29 12:56:35.050296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.655 [2024-11-29 12:56:35.060560] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.655 [2024-11-29 12:56:35.060588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.655 [2024-11-29 12:56:35.072145] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.655 [2024-11-29 12:56:35.072173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.655 [2024-11-29 12:56:35.083284] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.655 [2024-11-29 12:56:35.083313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.655 [2024-11-29 12:56:35.094672] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.655 [2024-11-29 12:56:35.094705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.655 [2024-11-29 12:56:35.106243] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.655 [2024-11-29 12:56:35.106273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.655 [2024-11-29 12:56:35.121702] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.655 [2024-11-29 12:56:35.121730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.655 [2024-11-29 12:56:35.131234] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.655 [2024-11-29 12:56:35.131263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.655 [2024-11-29 12:56:35.144201] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.655 [2024-11-29 12:56:35.144229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.656 [2024-11-29 12:56:35.153881] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.656 [2024-11-29 12:56:35.153938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.656 [2024-11-29 12:56:35.163402] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.656 [2024-11-29 12:56:35.163431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.914 11975.50 IOPS, 93.56 MiB/s [2024-11-29T12:56:35.429Z] [2024-11-29 12:56:35.173553] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.914 [2024-11-29 12:56:35.173580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.914 [2024-11-29 12:56:35.183537] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.914 [2024-11-29 12:56:35.183566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.914 [2024-11-29 12:56:35.193515] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.914 [2024-11-29 12:56:35.193543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.914 [2024-11-29 12:56:35.203279] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.914 [2024-11-29 12:56:35.203307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.914 [2024-11-29 12:56:35.212958] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.914 [2024-11-29 12:56:35.212986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.914 [2024-11-29 12:56:35.222600] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.914 [2024-11-29 12:56:35.222628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.914 [2024-11-29 12:56:35.232874] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.914 [2024-11-29 12:56:35.232912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.914 [2024-11-29 12:56:35.246331] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.914 [2024-11-29 12:56:35.246360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.914 [2024-11-29 12:56:35.255939] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.914 [2024-11-29 12:56:35.255981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.914 [2024-11-29 12:56:35.269656] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.914 [2024-11-29 12:56:35.269684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.914 [2024-11-29 12:56:35.277771] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.914 [2024-11-29 12:56:35.277798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.914 [2024-11-29 12:56:35.289728] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:03.914 [2024-11-29 12:56:35.289756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.914 [2024-11-29 12:56:35.301333] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.914 [2024-11-29 12:56:35.301361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.914 [2024-11-29 12:56:35.317984] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.914 [2024-11-29 12:56:35.318013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.914 [2024-11-29 12:56:35.334938] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.914 [2024-11-29 12:56:35.334969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.914 [2024-11-29 12:56:35.351000] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.914 [2024-11-29 12:56:35.351045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.914 [2024-11-29 12:56:35.368493] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.914 [2024-11-29 12:56:35.368522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.914 [2024-11-29 12:56:35.385202] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.914 [2024-11-29 12:56:35.385231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.914 [2024-11-29 12:56:35.403161] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.914 [2024-11-29 12:56:35.403192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.914 [2024-11-29 12:56:35.413834] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.914 [2024-11-29 12:56:35.413863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.914 [2024-11-29 12:56:35.424376] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.914 [2024-11-29 12:56:35.424404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.171 [2024-11-29 12:56:35.436082] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.171 [2024-11-29 12:56:35.436109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.171 [2024-11-29 12:56:35.450772] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.171 [2024-11-29 12:56:35.450802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.171 [2024-11-29 12:56:35.460078] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.171 [2024-11-29 12:56:35.460105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.171 [2024-11-29 12:56:35.472482] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.172 [2024-11-29 12:56:35.472510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.172 [2024-11-29 12:56:35.489222] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.172 [2024-11-29 12:56:35.489251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.172 [2024-11-29 12:56:35.505017] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.172 [2024-11-29 12:56:35.505045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.172 [2024-11-29 12:56:35.514229] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.172 [2024-11-29 12:56:35.514258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.172 [2024-11-29 12:56:35.524425] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.172 [2024-11-29 12:56:35.524454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.172 [2024-11-29 12:56:35.535678] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.172 [2024-11-29 12:56:35.535710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.172 [2024-11-29 12:56:35.548528] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.172 [2024-11-29 12:56:35.548558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.172 [2024-11-29 12:56:35.566359] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.172 [2024-11-29 12:56:35.566390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.172 [2024-11-29 12:56:35.580983] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.172 [2024-11-29 12:56:35.581012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.172 [2024-11-29 12:56:35.590051] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.172 [2024-11-29 12:56:35.590079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.172 [2024-11-29 12:56:35.601648] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.172 [2024-11-29 12:56:35.601678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.172 [2024-11-29 12:56:35.612501] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.172 [2024-11-29 12:56:35.612533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.172 [2024-11-29 12:56:35.623959] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.172 [2024-11-29 12:56:35.624005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.172 [2024-11-29 12:56:35.639808] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.172 [2024-11-29 12:56:35.639840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.172 [2024-11-29 12:56:35.655465] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.172 [2024-11-29 12:56:35.655497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.172 [2024-11-29 12:56:35.665571] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.172 [2024-11-29 12:56:35.665601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.172 [2024-11-29 12:56:35.676519] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.172 [2024-11-29 12:56:35.676548] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.430 [2024-11-29 12:56:35.688518] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.430 [2024-11-29 12:56:35.688548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.430 [2024-11-29 12:56:35.697710] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.430 [2024-11-29 12:56:35.697742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.430 [2024-11-29 12:56:35.709820] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.430 [2024-11-29 12:56:35.709851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.430 [2024-11-29 12:56:35.720967] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.430 [2024-11-29 12:56:35.721008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.430 [2024-11-29 12:56:35.733815] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.430 [2024-11-29 12:56:35.733845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.430 [2024-11-29 12:56:35.743363] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.430 [2024-11-29 12:56:35.743392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.430 [2024-11-29 12:56:35.755163] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.430 [2024-11-29 12:56:35.755193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.430 [2024-11-29 12:56:35.764637] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.430 [2024-11-29 12:56:35.764665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.430 [2024-11-29 12:56:35.778843] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.430 [2024-11-29 12:56:35.778872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.431 [2024-11-29 12:56:35.794173] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.431 [2024-11-29 12:56:35.794202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.431 [2024-11-29 12:56:35.812530] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.431 [2024-11-29 12:56:35.812561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.431 [2024-11-29 12:56:35.823893] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.431 [2024-11-29 12:56:35.823940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.431 [2024-11-29 12:56:35.837027] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.431 [2024-11-29 12:56:35.837055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.431 [2024-11-29 12:56:35.851475] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.431 [2024-11-29 12:56:35.851506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.431 [2024-11-29 12:56:35.867088] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.431 [2024-11-29 12:56:35.867120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.431 [2024-11-29 12:56:35.876662] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.431 [2024-11-29 12:56:35.876693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.431 [2024-11-29 12:56:35.889294] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.431 [2024-11-29 12:56:35.889325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.431 [2024-11-29 12:56:35.898912] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.431 [2024-11-29 12:56:35.898940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.431 [2024-11-29 12:56:35.909420] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.431 [2024-11-29 12:56:35.909449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.431 [2024-11-29 12:56:35.921838] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.431 [2024-11-29 12:56:35.921868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.431 [2024-11-29 12:56:35.939570] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.431 [2024-11-29 12:56:35.939601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.691 [2024-11-29 12:56:35.956252] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.691 [2024-11-29 12:56:35.956281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.691 [2024-11-29 12:56:35.965968] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.691 [2024-11-29 12:56:35.965996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.691 [2024-11-29 12:56:35.976137] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.691 [2024-11-29 12:56:35.976165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.691 [2024-11-29 12:56:35.985942] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.691 [2024-11-29 12:56:35.985983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.691 [2024-11-29 12:56:35.997381] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.691 [2024-11-29 12:56:35.997409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.691 [2024-11-29 12:56:36.008386] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.691 [2024-11-29 12:56:36.008416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.691 [2024-11-29 12:56:36.019056] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.691 [2024-11-29 12:56:36.019101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.691 [2024-11-29 12:56:36.030365] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.691 [2024-11-29 12:56:36.030396] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.691 [2024-11-29 12:56:36.040600] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.691 [2024-11-29 12:56:36.040628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.691 [2024-11-29 12:56:36.050692] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.691 [2024-11-29 12:56:36.050722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.691 [2024-11-29 12:56:36.064534] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.691 [2024-11-29 12:56:36.064564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.691 [2024-11-29 12:56:36.073114] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.691 [2024-11-29 12:56:36.073142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.691 [2024-11-29 12:56:36.084845] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.691 [2024-11-29 12:56:36.084874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.691 [2024-11-29 12:56:36.096448] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.691 [2024-11-29 12:56:36.096477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.691 [2024-11-29 12:56:36.113779] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.691 [2024-11-29 12:56:36.113837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.691 [2024-11-29 12:56:36.129405] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.691 [2024-11-29 12:56:36.129462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.691 [2024-11-29 12:56:36.139661] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.691 [2024-11-29 12:56:36.139707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.691 [2024-11-29 12:56:36.151747] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.691 [2024-11-29 12:56:36.151792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.691 12054.00 IOPS, 94.17 MiB/s [2024-11-29T12:56:36.206Z] [2024-11-29 12:56:36.166626] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.691 [2024-11-29 12:56:36.166660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.691 [2024-11-29 12:56:36.175699] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.691 [2024-11-29 12:56:36.175744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.691 [2024-11-29 12:56:36.187983] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.691 [2024-11-29 12:56:36.188011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.014 [2024-11-29 12:56:36.204102] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.014 [2024-11-29 12:56:36.204131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.014 [2024-11-29 
12:56:36.215507] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.014 [2024-11-29 12:56:36.215537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.014 [2024-11-29 12:56:36.232093] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.014 [2024-11-29 12:56:36.232121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.014 [2024-11-29 12:56:36.241649] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.014 [2024-11-29 12:56:36.241677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.014 [2024-11-29 12:56:36.251683] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.014 [2024-11-29 12:56:36.251713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.014 [2024-11-29 12:56:36.261696] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.014 [2024-11-29 12:56:36.261740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.014 [2024-11-29 12:56:36.271441] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.014 [2024-11-29 12:56:36.271488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.014 [2024-11-29 12:56:36.281105] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.014 [2024-11-29 12:56:36.281132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.014 [2024-11-29 12:56:36.290964] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.014 [2024-11-29 12:56:36.290992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.014 [2024-11-29 12:56:36.300865] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.014 [2024-11-29 12:56:36.300903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.014 [2024-11-29 12:56:36.310737] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.014 [2024-11-29 12:56:36.310765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.014 [2024-11-29 12:56:36.320662] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.014 [2024-11-29 12:56:36.320690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.014 [2024-11-29 12:56:36.330640] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.014 [2024-11-29 12:56:36.330669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.014 [2024-11-29 12:56:36.340444] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.014 [2024-11-29 12:56:36.340475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.014 [2024-11-29 12:56:36.357445] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.014 [2024-11-29 12:56:36.357474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.014 [2024-11-29 12:56:36.374837] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.014 [2024-11-29 12:56:36.374867] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.014 [2024-11-29 12:56:36.390331] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.015 [2024-11-29 12:56:36.390360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.015 [2024-11-29 12:56:36.399914] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.015 [2024-11-29 12:56:36.399952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.015 [2024-11-29 12:56:36.412708] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.015 [2024-11-29 12:56:36.412737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.015 [2024-11-29 12:56:36.428999] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.015 [2024-11-29 12:56:36.429036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.015 [2024-11-29 12:56:36.445823] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.015 [2024-11-29 12:56:36.445856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.015 [2024-11-29 12:56:36.461427] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.015 [2024-11-29 12:56:36.461457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.015 [2024-11-29 12:56:36.469787] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.015 [2024-11-29 12:56:36.469816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.015 [2024-11-29 12:56:36.482169] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.015 [2024-11-29 12:56:36.482199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.015 [2024-11-29 12:56:36.497593] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.015 [2024-11-29 12:56:36.497623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.293 [2024-11-29 12:56:36.515341] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.293 [2024-11-29 12:56:36.515387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.293 [2024-11-29 12:56:36.524280] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.293 [2024-11-29 12:56:36.524309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.293 [2024-11-29 12:56:36.538726] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.293 [2024-11-29 12:56:36.538755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.293 [2024-11-29 12:56:36.548034] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.293 [2024-11-29 12:56:36.548065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.293 [2024-11-29 12:56:36.562426] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.293 [2024-11-29 12:56:36.562455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.293 [2024-11-29 12:56:36.571367] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.293 [2024-11-29 12:56:36.571394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.293 [2024-11-29 12:56:36.585073] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.293 [2024-11-29 12:56:36.585101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.293 [2024-11-29 12:56:36.593885] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.293 [2024-11-29 12:56:36.593925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.293 [2024-11-29 12:56:36.608511] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.293 [2024-11-29 12:56:36.608541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.293 [2024-11-29 12:56:36.618117] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.293 [2024-11-29 12:56:36.618184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.293 [2024-11-29 12:56:36.633428] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.293 [2024-11-29 12:56:36.633457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.293 [2024-11-29 12:56:36.650032] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.293 [2024-11-29 12:56:36.650063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.293 [2024-11-29 12:56:36.666873] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.293 [2024-11-29 12:56:36.666916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.293 [2024-11-29 12:56:36.684199] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.293 [2024-11-29 12:56:36.684227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.293 [2024-11-29 12:56:36.693971] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.293 [2024-11-29 12:56:36.693999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.293 [2024-11-29 12:56:36.709739] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.293 [2024-11-29 12:56:36.709769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.293 [2024-11-29 12:56:36.725566] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.293 [2024-11-29 12:56:36.725597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.293 [2024-11-29 12:56:36.735587] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.293 [2024-11-29 12:56:36.735618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.293 [2024-11-29 12:56:36.750757] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.293 [2024-11-29 12:56:36.750788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.293 [2024-11-29 12:56:36.761314] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.293 [2024-11-29 12:56:36.761343] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.293 [2024-11-29 12:56:36.771905] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.293 [2024-11-29 12:56:36.771946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.293 [2024-11-29 12:56:36.788808] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.293 [2024-11-29 12:56:36.788856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.293 [2024-11-29 12:56:36.805795] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.293 [2024-11-29 12:56:36.805837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.552 [2024-11-29 12:56:36.815046] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.552 [2024-11-29 12:56:36.815081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.552 [2024-11-29 12:56:36.829902] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.552 [2024-11-29 12:56:36.829969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.552 [2024-11-29 12:56:36.847386] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.552 [2024-11-29 12:56:36.847428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.552 [2024-11-29 12:56:36.857949] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.552 [2024-11-29 12:56:36.857991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.553 [2024-11-29 12:56:36.868911] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.553 [2024-11-29 12:56:36.868955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.553 [2024-11-29 12:56:36.883636] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.553 [2024-11-29 12:56:36.883666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.553 [2024-11-29 12:56:36.900237] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.553 [2024-11-29 12:56:36.900267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.553 [2024-11-29 12:56:36.909646] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.553 [2024-11-29 12:56:36.909675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.553 [2024-11-29 12:56:36.924554] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.553 [2024-11-29 12:56:36.924587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.553 [2024-11-29 12:56:36.940114] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.553 [2024-11-29 12:56:36.940163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.553 [2024-11-29 12:56:36.950067] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.553 [2024-11-29 12:56:36.950096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.553 [2024-11-29 12:56:36.961078] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.553 [2024-11-29 12:56:36.961122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.553 [2024-11-29 12:56:36.971009] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.553 [2024-11-29 12:56:36.971037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.553 [2024-11-29 12:56:36.981054] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.553 [2024-11-29 12:56:36.981082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.553 [2024-11-29 12:56:36.991292] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.553 [2024-11-29 12:56:36.991320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.553 [2024-11-29 12:56:37.001848] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.553 [2024-11-29 12:56:37.001888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.553 [2024-11-29 12:56:37.015526] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.553 [2024-11-29 12:56:37.015556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.553 [2024-11-29 12:56:37.029883] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.553 [2024-11-29 12:56:37.029922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.553 [2024-11-29 12:56:37.046007] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.553 [2024-11-29 12:56:37.046036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.553 [2024-11-29 12:56:37.055422] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.553 [2024-11-29 12:56:37.055474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.812 [2024-11-29 12:56:37.069517] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.812 [2024-11-29 12:56:37.069549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.812 [2024-11-29 12:56:37.080732] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.812 [2024-11-29 12:56:37.080759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.812 [2024-11-29 12:56:37.089426] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.812 [2024-11-29 12:56:37.089455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.812 [2024-11-29 12:56:37.103311] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.812 [2024-11-29 12:56:37.103341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.812 [2024-11-29 12:56:37.112353] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.812 [2024-11-29 12:56:37.112382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.812 [2024-11-29 12:56:37.126887] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.812 [2024-11-29 12:56:37.126926] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.812 [2024-11-29 12:56:37.138486] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.812 [2024-11-29 12:56:37.138514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.812 [2024-11-29 12:56:37.146810] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.812 [2024-11-29 12:56:37.146838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.812 [2024-11-29 12:56:37.162025] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.812 [2024-11-29 12:56:37.162057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.812 12123.50 IOPS, 94.71 MiB/s [2024-11-29T12:56:37.327Z] [2024-11-29 12:56:37.181401] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.812 [2024-11-29 12:56:37.181437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.812 [2024-11-29 12:56:37.196270] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.812 [2024-11-29 12:56:37.196321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.812 [2024-11-29 12:56:37.214056] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.812 [2024-11-29 12:56:37.214150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.812 [2024-11-29 12:56:37.224950] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.812 [2024-11-29 12:56:37.224979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.812 [2024-11-29 12:56:37.240079] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.812 [2024-11-29 12:56:37.240125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.812 [2024-11-29 12:56:37.255417] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.812 [2024-11-29 12:56:37.255470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.812 [2024-11-29 12:56:37.264491] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.812 [2024-11-29 12:56:37.264520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.812 [2024-11-29 12:56:37.278004] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.812 [2024-11-29 12:56:37.278035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.812 [2024-11-29 12:56:37.287555] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.812 [2024-11-29 12:56:37.287584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.812 [2024-11-29 12:56:37.298269] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.812 [2024-11-29 12:56:37.298298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.812 [2024-11-29 12:56:37.309873] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.812 [2024-11-29 12:56:37.309912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.812 [2024-11-29 
12:56:37.319385] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.813 [2024-11-29 12:56:37.319414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.071 [2024-11-29 12:56:37.333273] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.071 [2024-11-29 12:56:37.333302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.071 [2024-11-29 12:56:37.351621] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.071 [2024-11-29 12:56:37.351653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.071 [2024-11-29 12:56:37.365918] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.071 [2024-11-29 12:56:37.365962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.071 [2024-11-29 12:56:37.375605] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.071 [2024-11-29 12:56:37.375635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.071 [2024-11-29 12:56:37.388089] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.071 [2024-11-29 12:56:37.388118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.071 [2024-11-29 12:56:37.406501] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.071 [2024-11-29 12:56:37.406530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.071 [2024-11-29 12:56:37.420667] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.071 [2024-11-29 12:56:37.420695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.071 [2024-11-29 12:56:37.429483] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.071 [2024-11-29 12:56:37.429511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.071 [2024-11-29 12:56:37.440631] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.071 [2024-11-29 12:56:37.440659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.072 [2024-11-29 12:56:37.451948] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.072 [2024-11-29 12:56:37.451975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.072 [2024-11-29 12:56:37.467924] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.072 [2024-11-29 12:56:37.467952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.072 [2024-11-29 12:56:37.483877] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.072 [2024-11-29 12:56:37.483921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.072 [2024-11-29 12:56:37.492861] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.072 [2024-11-29 12:56:37.492899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.072 [2024-11-29 12:56:37.504639] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.072 [2024-11-29 12:56:37.504669] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.072 [2024-11-29 12:56:37.515796] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.072 [2024-11-29 12:56:37.515825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.072 [2024-11-29 12:56:37.529601] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.072 [2024-11-29 12:56:37.529650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.072 [2024-11-29 12:56:37.539314] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.072 [2024-11-29 12:56:37.539343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.072 [2024-11-29 12:56:37.553188] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.072 [2024-11-29 12:56:37.553217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.072 [2024-11-29 12:56:37.562978] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.072 [2024-11-29 12:56:37.563022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.072 [2024-11-29 12:56:37.577100] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.072 [2024-11-29 12:56:37.577128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.331 [2024-11-29 12:56:37.589162] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.331 [2024-11-29 12:56:37.589192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.331 [2024-11-29 12:56:37.604180] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.331 [2024-11-29 12:56:37.604209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.331 [2024-11-29 12:56:37.613408] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.331 [2024-11-29 12:56:37.613437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.331 [2024-11-29 12:56:37.624194] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.331 [2024-11-29 12:56:37.624224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.331 [2024-11-29 12:56:37.634972] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.331 [2024-11-29 12:56:37.635011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.331 [2024-11-29 12:56:37.647455] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.331 [2024-11-29 12:56:37.647486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.331 [2024-11-29 12:56:37.656462] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.331 [2024-11-29 12:56:37.656493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.331 [2024-11-29 12:56:37.669501] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.331 [2024-11-29 12:56:37.669535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.331 [2024-11-29 12:56:37.679238] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.331 [2024-11-29 12:56:37.679271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.331 [2024-11-29 12:56:37.689338] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.331 [2024-11-29 12:56:37.689373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.331 [2024-11-29 12:56:37.699869] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.331 [2024-11-29 12:56:37.699913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.331 [2024-11-29 12:56:37.712197] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.331 [2024-11-29 12:56:37.712225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.331 [2024-11-29 12:56:37.721901] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.331 [2024-11-29 12:56:37.721927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.331 [2024-11-29 12:56:37.736642] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.331 [2024-11-29 12:56:37.736671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.331 [2024-11-29 12:56:37.746626] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.331 [2024-11-29 12:56:37.746677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.331 [2024-11-29 12:56:37.762455] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.331 [2024-11-29 12:56:37.762536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.331 [2024-11-29 12:56:37.778620] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.331 [2024-11-29 12:56:37.778650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.331 [2024-11-29 12:56:37.788791] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.331 [2024-11-29 12:56:37.788819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.331 [2024-11-29 12:56:37.799944] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.331 [2024-11-29 12:56:37.799976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.331 [2024-11-29 12:56:37.811778] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.331 [2024-11-29 12:56:37.811810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.331 [2024-11-29 12:56:37.828821] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.331 [2024-11-29 12:56:37.828853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.591 [2024-11-29 12:56:37.846413] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.591 [2024-11-29 12:56:37.846444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.591 [2024-11-29 12:56:37.861664] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.591 [2024-11-29 12:56:37.861696] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.591 [2024-11-29 12:56:37.872620] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.591 [2024-11-29 12:56:37.872649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.591 [2024-11-29 12:56:37.885051] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.591 [2024-11-29 12:56:37.885079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.591 [2024-11-29 12:56:37.896188] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.591 [2024-11-29 12:56:37.896217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.591 [2024-11-29 12:56:37.907549] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.591 [2024-11-29 12:56:37.907580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.591 [2024-11-29 12:56:37.923266] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.591 [2024-11-29 12:56:37.923296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.591 [2024-11-29 12:56:37.941769] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.591 [2024-11-29 12:56:37.941834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.591 [2024-11-29 12:56:37.952386] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.591 [2024-11-29 12:56:37.952415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.591 [2024-11-29 12:56:37.964221] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.591 [2024-11-29 12:56:37.964250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.591 [2024-11-29 12:56:37.979595] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.591 [2024-11-29 12:56:37.979624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.591 [2024-11-29 12:56:37.998179] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.591 [2024-11-29 12:56:37.998210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.591 [2024-11-29 12:56:38.008321] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.591 [2024-11-29 12:56:38.008352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.591 [2024-11-29 12:56:38.018641] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.591 [2024-11-29 12:56:38.018671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.591 [2024-11-29 12:56:38.030593] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.591 [2024-11-29 12:56:38.030621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.591 [2024-11-29 12:56:38.039683] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.591 [2024-11-29 12:56:38.039713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.591 [2024-11-29 12:56:38.050579] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.591 [2024-11-29 12:56:38.050606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.591 [2024-11-29 12:56:38.063004] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.591 [2024-11-29 12:56:38.063033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.591 [2024-11-29 12:56:38.073888] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.591 [2024-11-29 12:56:38.073942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.591 [2024-11-29 12:56:38.090548] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.591 [2024-11-29 12:56:38.090596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.852 [2024-11-29 12:56:38.106634] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.852 [2024-11-29 12:56:38.106696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.852 [2024-11-29 12:56:38.116173] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.852 [2024-11-29 12:56:38.116212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.852 [2024-11-29 12:56:38.126782] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.852 [2024-11-29 12:56:38.126812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.852 [2024-11-29 12:56:38.137309] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.852 [2024-11-29 12:56:38.137338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.852 [2024-11-29 12:56:38.150827] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.852 [2024-11-29 12:56:38.150855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.852 [2024-11-29 12:56:38.160281] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.852 [2024-11-29 12:56:38.160309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.852 12110.20 IOPS, 94.61 MiB/s [2024-11-29T12:56:38.367Z] [2024-11-29 12:56:38.170010] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.852 [2024-11-29 12:56:38.170037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.852 00:10:06.852 Latency(us) 00:10:06.852 [2024-11-29T12:56:38.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:06.852 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:06.852 Nvme1n1 : 5.01 12117.27 94.67 0.00 0.00 10551.50 4230.05 20256.58 00:10:06.852 [2024-11-29T12:56:38.367Z] =================================================================================================================== 00:10:06.852 [2024-11-29T12:56:38.367Z] Total : 12117.27 94.67 0.00 0.00 10551.50 4230.05 20256.58 00:10:06.852 [2024-11-29 12:56:38.178002] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.852 [2024-11-29 12:56:38.178038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.852 [2024-11-29 
12:56:38.186001] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.852 [2024-11-29 12:56:38.186027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.852 [2024-11-29 12:56:38.194001] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.852 [2024-11-29 12:56:38.194024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.852 [2024-11-29 12:56:38.206009] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.852 [2024-11-29 12:56:38.206048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.852 [2024-11-29 12:56:38.218014] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.852 [2024-11-29 12:56:38.218037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.852 [2024-11-29 12:56:38.226013] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.852 [2024-11-29 12:56:38.226035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.852 [2024-11-29 12:56:38.234025] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.852 [2024-11-29 12:56:38.234048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.852 [2024-11-29 12:56:38.242021] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.852 [2024-11-29 12:56:38.242043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.852 [2024-11-29 12:56:38.250022] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.852 [2024-11-29 12:56:38.250045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.852 [2024-11-29 12:56:38.258040] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.852 [2024-11-29 12:56:38.258063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.852 [2024-11-29 12:56:38.266025] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.852 [2024-11-29 12:56:38.266047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.852 [2024-11-29 12:56:38.274027] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.852 [2024-11-29 12:56:38.274050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.852 [2024-11-29 12:56:38.282028] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.852 [2024-11-29 12:56:38.282066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.852 [2024-11-29 12:56:38.294040] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.852 [2024-11-29 12:56:38.294067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.852 [2024-11-29 12:56:38.306035] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.852 [2024-11-29 12:56:38.306058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.852 [2024-11-29 12:56:38.318074] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.852 [2024-11-29 12:56:38.318121] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.852 [2024-11-29 12:56:38.330047] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.852 [2024-11-29 12:56:38.330072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.852 [2024-11-29 12:56:38.342042] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.852 [2024-11-29 12:56:38.342064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.853 [2024-11-29 12:56:38.350044] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.853 [2024-11-29 12:56:38.350065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.853 [2024-11-29 12:56:38.362050] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.853 [2024-11-29 12:56:38.362072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.112 [2024-11-29 12:56:38.370052] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.112 [2024-11-29 12:56:38.370075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.112 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65540) - No such process 00:10:07.112 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65540 00:10:07.112 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.112 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.112 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.112 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.112 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:07.112 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.112 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.112 delay0 00:10:07.112 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.112 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:07.112 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.112 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.112 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.112 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:10:07.112 [2024-11-29 12:56:38.576045] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:13.679 Initializing NVMe Controllers 00:10:13.679 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:10:13.679 Associating 
TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:13.679 Initialization complete. Launching workers. 00:10:13.679 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 107 00:10:13.679 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 394, failed to submit 33 00:10:13.679 success 280, unsuccessful 114, failed 0 00:10:13.679 12:56:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:13.679 12:56:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:13.679 12:56:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:13.679 12:56:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:13.679 12:56:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:13.679 12:56:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:13.679 12:56:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:13.679 12:56:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:13.679 rmmod nvme_tcp 00:10:13.679 rmmod nvme_fabrics 00:10:13.679 rmmod nvme_keyring 00:10:13.679 12:56:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:13.679 12:56:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:13.679 12:56:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:13.679 12:56:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 65397 ']' 00:10:13.679 12:56:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 65397 00:10:13.679 12:56:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 65397 ']' 00:10:13.679 12:56:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 65397 00:10:13.679 12:56:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:13.679 12:56:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:13.679 12:56:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65397 00:10:13.679 12:56:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:13.679 12:56:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:13.679 killing process with pid 65397 00:10:13.679 12:56:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65397' 00:10:13.679 12:56:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 65397 00:10:13.679 12:56:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 65397 00:10:13.679 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:13.679 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:13.679 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:13.679 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:13.679 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:13.679 
12:56:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:13.679 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:13.679 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:13.679 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:13.679 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:13.679 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:13.679 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:13.679 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:13.679 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:13.679 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:13.679 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:13.679 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:13.679 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:13.937 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:13.937 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:13.937 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:13.937 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:13.937 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:13.937 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.937 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.937 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.937 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:10:13.937 00:10:13.937 real 0m24.514s 00:10:13.937 user 0m39.170s 00:10:13.937 sys 0m7.503s 00:10:13.937 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.937 ************************************ 00:10:13.937 END TEST nvmf_zcopy 00:10:13.937 ************************************ 00:10:13.937 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.937 12:56:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:13.937 12:56:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:13.937 12:56:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.937 12:56:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:13.937 ************************************ 
00:10:13.937 START TEST nvmf_nmic 00:10:13.937 ************************************ 00:10:13.937 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:13.937 * Looking for test storage... 00:10:13.937 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:13.937 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:13.937 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:13.937 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:14.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.196 --rc genhtml_branch_coverage=1 00:10:14.196 --rc genhtml_function_coverage=1 00:10:14.196 --rc genhtml_legend=1 00:10:14.196 --rc geninfo_all_blocks=1 00:10:14.196 --rc geninfo_unexecuted_blocks=1 00:10:14.196 00:10:14.196 ' 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:14.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.196 --rc genhtml_branch_coverage=1 00:10:14.196 --rc genhtml_function_coverage=1 00:10:14.196 --rc genhtml_legend=1 00:10:14.196 --rc geninfo_all_blocks=1 00:10:14.196 --rc geninfo_unexecuted_blocks=1 00:10:14.196 00:10:14.196 ' 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:14.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.196 --rc genhtml_branch_coverage=1 00:10:14.196 --rc genhtml_function_coverage=1 00:10:14.196 --rc genhtml_legend=1 00:10:14.196 --rc geninfo_all_blocks=1 00:10:14.196 --rc geninfo_unexecuted_blocks=1 00:10:14.196 00:10:14.196 ' 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:14.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.196 --rc genhtml_branch_coverage=1 00:10:14.196 --rc genhtml_function_coverage=1 00:10:14.196 --rc genhtml_legend=1 00:10:14.196 --rc geninfo_all_blocks=1 00:10:14.196 --rc geninfo_unexecuted_blocks=1 00:10:14.196 00:10:14.196 ' 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:14.196 12:56:45 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.196 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:14.197 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:14.197 12:56:45 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:14.197 Cannot 
find device "nvmf_init_br" 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:14.197 Cannot find device "nvmf_init_br2" 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:14.197 Cannot find device "nvmf_tgt_br" 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:14.197 Cannot find device "nvmf_tgt_br2" 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:14.197 Cannot find device "nvmf_init_br" 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:14.197 Cannot find device "nvmf_init_br2" 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:14.197 Cannot find device "nvmf_tgt_br" 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:14.197 Cannot find device "nvmf_tgt_br2" 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:14.197 Cannot find device "nvmf_br" 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:14.197 Cannot find device "nvmf_init_if" 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:14.197 Cannot find device "nvmf_init_if2" 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:14.197 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:14.197 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:10:14.197 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:14.457 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:14.457 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:10:14.457 00:10:14.457 --- 10.0.0.3 ping statistics --- 00:10:14.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.457 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:14.457 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:14.457 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:10:14.457 00:10:14.457 --- 10.0.0.4 ping statistics --- 00:10:14.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.457 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:14.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:14.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:10:14.457 00:10:14.457 --- 10.0.0.1 ping statistics --- 00:10:14.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.457 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:14.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:14.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:10:14.457 00:10:14.457 --- 10.0.0.2 ping statistics --- 00:10:14.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.457 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=65919 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 65919 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 65919 ']' 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:14.457 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.716 [2024-11-29 12:56:46.009662] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:10:14.716 [2024-11-29 12:56:46.009758] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.716 [2024-11-29 12:56:46.164229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:14.976 [2024-11-29 12:56:46.247582] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:14.976 [2024-11-29 12:56:46.247853] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:14.976 [2024-11-29 12:56:46.247939] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:14.976 [2024-11-29 12:56:46.248022] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:14.976 [2024-11-29 12:56:46.248095] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:14.976 [2024-11-29 12:56:46.249494] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.976 [2024-11-29 12:56:46.251978] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:14.976 [2024-11-29 12:56:46.252098] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:14.976 [2024-11-29 12:56:46.252170] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.976 [2024-11-29 12:56:46.324428] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:15.544 12:56:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.544 12:56:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:15.544 12:56:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:15.544 12:56:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:15.544 12:56:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:15.544 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:15.544 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:15.544 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.544 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:15.544 [2024-11-29 12:56:47.033802] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:15.544 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.544 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:15.544 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.544 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:15.804 Malloc0 00:10:15.804 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.804 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:15.804 12:56:47 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.804 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:15.804 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.804 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:15.804 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.804 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:15.804 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.804 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:15.804 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.804 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:15.804 [2024-11-29 12:56:47.105134] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:15.804 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.804 test case1: single bdev can't be used in multiple subsystems 00:10:15.804 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:15.804 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:15.804 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.804 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:15.804 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.804 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:10:15.804 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.804 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:15.804 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.804 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:15.804 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:15.805 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.805 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:15.805 [2024-11-29 12:56:47.128905] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:15.805 [2024-11-29 12:56:47.128945] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:15.805 [2024-11-29 12:56:47.128958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.805 request: 00:10:15.805 { 00:10:15.805 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:15.805 "namespace": { 00:10:15.805 "bdev_name": "Malloc0", 00:10:15.805 "no_auto_visible": false, 00:10:15.805 "hide_metadata": false 00:10:15.805 }, 00:10:15.805 "method": "nvmf_subsystem_add_ns", 00:10:15.805 "req_id": 1 00:10:15.805 } 00:10:15.805 Got JSON-RPC error response 00:10:15.805 response: 00:10:15.805 { 00:10:15.805 "code": -32602, 00:10:15.805 "message": "Invalid parameters" 00:10:15.805 } 00:10:15.805 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:15.805 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:15.805 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:15.805 Adding namespace failed - expected result. 00:10:15.805 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:15.805 test case2: host connect to nvmf target in multiple paths 00:10:15.805 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:15.805 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:10:15.805 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.805 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:15.805 [2024-11-29 12:56:47.141022] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:10:15.805 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.805 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:15.805 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:10:16.064 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:16.064 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:16.064 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:16.064 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:16.064 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:17.969 12:56:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:17.969 12:56:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:17.969 12:56:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:17.969 12:56:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:17.969 12:56:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 
00:10:17.969 12:56:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:17.969 12:56:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:17.969 [global] 00:10:17.969 thread=1 00:10:17.969 invalidate=1 00:10:17.969 rw=write 00:10:17.969 time_based=1 00:10:17.969 runtime=1 00:10:17.969 ioengine=libaio 00:10:17.969 direct=1 00:10:17.969 bs=4096 00:10:17.969 iodepth=1 00:10:17.969 norandommap=0 00:10:17.969 numjobs=1 00:10:17.969 00:10:17.969 verify_dump=1 00:10:17.969 verify_backlog=512 00:10:17.969 verify_state_save=0 00:10:17.969 do_verify=1 00:10:17.969 verify=crc32c-intel 00:10:17.969 [job0] 00:10:17.969 filename=/dev/nvme0n1 00:10:17.969 Could not set queue depth (nvme0n1) 00:10:18.229 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:18.229 fio-3.35 00:10:18.229 Starting 1 thread 00:10:19.605 00:10:19.605 job0: (groupid=0, jobs=1): err= 0: pid=66015: Fri Nov 29 12:56:50 2024 00:10:19.605 read: IOPS=2594, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1001msec) 00:10:19.605 slat (nsec): min=11923, max=52804, avg=14225.07, stdev=3334.39 00:10:19.605 clat (usec): min=139, max=854, avg=201.02, stdev=26.46 00:10:19.605 lat (usec): min=154, max=866, avg=215.25, stdev=26.57 00:10:19.605 clat percentiles (usec): 00:10:19.605 | 1.00th=[ 155], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 182], 00:10:19.605 | 30.00th=[ 188], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 208], 00:10:19.605 | 70.00th=[ 212], 80.00th=[ 221], 90.00th=[ 229], 95.00th=[ 237], 00:10:19.605 | 99.00th=[ 251], 99.50th=[ 258], 99.90th=[ 404], 99.95th=[ 461], 00:10:19.605 | 99.99th=[ 857] 00:10:19.605 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:19.605 slat (usec): min=13, max=122, avg=19.91, stdev= 4.58 00:10:19.605 clat (usec): min=87, max=600, avg=120.84, stdev=21.09 00:10:19.605 lat (usec): min=105, max=626, avg=140.75, stdev=21.95 00:10:19.605 clat percentiles (usec): 00:10:19.605 | 1.00th=[ 94], 5.00th=[ 98], 10.00th=[ 102], 20.00th=[ 106], 00:10:19.605 | 30.00th=[ 111], 40.00th=[ 116], 50.00th=[ 120], 60.00th=[ 124], 00:10:19.605 | 70.00th=[ 128], 80.00th=[ 133], 90.00th=[ 141], 95.00th=[ 147], 00:10:19.605 | 99.00th=[ 165], 99.50th=[ 200], 99.90th=[ 326], 99.95th=[ 457], 00:10:19.605 | 99.99th=[ 603] 00:10:19.605 bw ( KiB/s): min=12263, max=12263, per=99.90%, avg=12263.00, stdev= 0.00, samples=1 00:10:19.606 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:10:19.606 lat (usec) : 100=3.88%, 250=95.38%, 500=0.71%, 750=0.02%, 1000=0.02% 00:10:19.606 cpu : usr=2.10%, sys=7.70%, ctx=5675, majf=0, minf=5 00:10:19.606 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.606 issued rwts: total=2597,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.606 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.606 00:10:19.606 Run status group 0 (all jobs): 00:10:19.606 READ: bw=10.1MiB/s (10.6MB/s), 10.1MiB/s-10.1MiB/s (10.6MB/s-10.6MB/s), io=10.1MiB (10.6MB), run=1001-1001msec 00:10:19.606 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:19.606 00:10:19.606 Disk stats (read/write): 00:10:19.606 nvme0n1: ios=2515/2560, 
merge=0/0, ticks=538/331, in_queue=869, util=91.88% 00:10:19.606 12:56:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:19.606 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:19.606 12:56:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:19.606 12:56:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:19.606 12:56:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:19.606 12:56:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:19.606 12:56:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:19.606 12:56:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:19.606 12:56:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:19.606 12:56:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:19.606 12:56:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:19.606 12:56:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:19.606 12:56:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:19.606 12:56:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:19.606 12:56:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:19.606 12:56:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:19.606 12:56:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:19.606 rmmod nvme_tcp 00:10:19.606 rmmod nvme_fabrics 00:10:19.606 rmmod nvme_keyring 00:10:19.606 12:56:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:19.606 12:56:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:19.606 12:56:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:19.606 12:56:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 65919 ']' 00:10:19.606 12:56:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 65919 00:10:19.606 12:56:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 65919 ']' 00:10:19.606 12:56:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 65919 00:10:19.606 12:56:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:19.606 12:56:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.606 12:56:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65919 00:10:19.606 12:56:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:19.606 12:56:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:19.606 killing process with pid 65919 00:10:19.606 12:56:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65919' 00:10:19.606 12:56:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@973 -- # kill 65919 00:10:19.606 12:56:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 65919 00:10:19.902 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:19.902 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:19.902 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:19.902 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:19.902 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:19.902 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:19.902 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:19.902 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:19.902 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:19.902 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:19.902 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:19.902 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:19.902 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:19.902 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:19.902 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:19.902 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:19.902 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:19.902 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:19.902 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:19.902 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:19.902 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:19.902 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:19.902 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:19.902 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.902 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:19.902 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:10:20.160 00:10:20.160 real 0m6.082s 00:10:20.160 user 0m18.773s 00:10:20.160 sys 0m2.126s 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.160 
************************************ 00:10:20.160 END TEST nvmf_nmic 00:10:20.160 ************************************ 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:20.160 ************************************ 00:10:20.160 START TEST nvmf_fio_target 00:10:20.160 ************************************ 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:20.160 * Looking for test storage... 00:10:20.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.160 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:20.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.160 --rc genhtml_branch_coverage=1 00:10:20.160 --rc genhtml_function_coverage=1 00:10:20.161 --rc genhtml_legend=1 00:10:20.161 --rc geninfo_all_blocks=1 00:10:20.161 --rc geninfo_unexecuted_blocks=1 00:10:20.161 00:10:20.161 ' 00:10:20.161 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:20.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.161 --rc genhtml_branch_coverage=1 00:10:20.161 --rc genhtml_function_coverage=1 00:10:20.161 --rc genhtml_legend=1 00:10:20.161 --rc geninfo_all_blocks=1 00:10:20.161 --rc geninfo_unexecuted_blocks=1 00:10:20.161 00:10:20.161 ' 00:10:20.161 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:20.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.161 --rc genhtml_branch_coverage=1 00:10:20.161 --rc genhtml_function_coverage=1 00:10:20.161 --rc genhtml_legend=1 00:10:20.161 --rc geninfo_all_blocks=1 00:10:20.161 --rc geninfo_unexecuted_blocks=1 00:10:20.161 00:10:20.161 ' 00:10:20.161 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:20.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.161 --rc genhtml_branch_coverage=1 00:10:20.161 --rc genhtml_function_coverage=1 00:10:20.161 --rc genhtml_legend=1 00:10:20.161 --rc geninfo_all_blocks=1 00:10:20.161 --rc geninfo_unexecuted_blocks=1 00:10:20.161 00:10:20.161 ' 00:10:20.161 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:20.161 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:20.161 
12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.161 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.161 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.161 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.161 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:20.161 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.161 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.161 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.161 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.161 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:20.420 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:20.420 12:56:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:20.420 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:20.421 Cannot find device "nvmf_init_br" 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:20.421 Cannot find device "nvmf_init_br2" 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:20.421 Cannot find device "nvmf_tgt_br" 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:20.421 Cannot find device "nvmf_tgt_br2" 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:20.421 Cannot find device "nvmf_init_br" 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:20.421 Cannot find device "nvmf_init_br2" 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:20.421 Cannot find device "nvmf_tgt_br" 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:20.421 Cannot find device "nvmf_tgt_br2" 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:20.421 Cannot find device "nvmf_br" 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:20.421 Cannot find device "nvmf_init_if" 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:20.421 Cannot find device "nvmf_init_if2" 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:20.421 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:10:20.421 
12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:20.421 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:20.421 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:20.679 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:20.679 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:20.679 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:20.680 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:20.680 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:20.680 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:20.680 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:20.680 12:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:10:20.680 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:20.680 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:20.680 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:20.680 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:20.680 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:20.680 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:20.680 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:20.680 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:20.680 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:20.680 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:20.680 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:10:20.680 00:10:20.680 --- 10.0.0.3 ping statistics --- 00:10:20.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.680 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:10:20.680 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:20.680 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:20.680 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:10:20.680 00:10:20.680 --- 10.0.0.4 ping statistics --- 00:10:20.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.680 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:10:20.680 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:20.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:20.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:10:20.680 00:10:20.680 --- 10.0.0.1 ping statistics --- 00:10:20.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.680 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:10:20.680 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:20.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:20.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:10:20.680 00:10:20.680 --- 10.0.0.2 ping statistics --- 00:10:20.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.680 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:10:20.680 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:20.680 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:10:20.680 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:20.680 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:20.680 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:20.680 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:20.680 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:20.680 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:20.680 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:20.680 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:20.680 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:20.680 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:20.680 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.680 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=66245 00:10:20.680 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:20.680 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 66245 00:10:20.680 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 66245 ']' 00:10:20.680 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.680 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:20.680 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.680 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:20.680 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.680 [2024-11-29 12:56:52.165386] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:10:20.680 [2024-11-29 12:56:52.165489] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.938 [2024-11-29 12:56:52.313742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:20.938 [2024-11-29 12:56:52.377644] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:20.938 [2024-11-29 12:56:52.377957] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:20.938 [2024-11-29 12:56:52.378144] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:20.938 [2024-11-29 12:56:52.378376] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:20.938 [2024-11-29 12:56:52.378541] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:20.938 [2024-11-29 12:56:52.379773] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.938 [2024-11-29 12:56:52.379953] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.938 [2024-11-29 12:56:52.380545] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:20.938 [2024-11-29 12:56:52.380560] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.197 [2024-11-29 12:56:52.463531] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:21.197 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:21.197 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:21.197 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:21.197 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:21.197 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.197 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:21.197 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:21.455 [2024-11-29 12:56:52.869315] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:21.455 12:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:22.020 12:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:22.020 12:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:22.278 12:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:22.279 12:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:22.538 12:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:22.538 12:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:22.797 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:22.797 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:23.055 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:23.315 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:23.315 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:23.574 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:23.575 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:24.140 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:24.140 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:24.140 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:24.399 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:24.399 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:24.966 12:56:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:24.966 12:56:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:24.966 12:56:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:25.533 [2024-11-29 12:56:56.745669] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:25.533 12:56:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:25.533 12:56:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:25.791 12:56:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:26.049 12:56:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:26.049 12:56:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:26.049 12:56:57 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:26.049 12:56:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:26.050 12:56:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:26.050 12:56:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:27.950 12:56:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:27.950 12:56:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:27.951 12:56:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:27.951 12:56:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:27.951 12:56:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:27.951 12:56:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:27.951 12:56:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:27.951 [global] 00:10:27.951 thread=1 00:10:27.951 invalidate=1 00:10:27.951 rw=write 00:10:27.951 time_based=1 00:10:27.951 runtime=1 00:10:27.951 ioengine=libaio 00:10:27.951 direct=1 00:10:27.951 bs=4096 00:10:27.951 iodepth=1 00:10:27.951 norandommap=0 00:10:27.951 numjobs=1 00:10:27.951 00:10:27.951 verify_dump=1 00:10:27.951 verify_backlog=512 00:10:27.951 verify_state_save=0 00:10:27.951 do_verify=1 00:10:27.951 verify=crc32c-intel 00:10:27.951 [job0] 00:10:27.951 filename=/dev/nvme0n1 00:10:27.951 [job1] 00:10:27.951 filename=/dev/nvme0n2 00:10:27.951 [job2] 00:10:27.951 filename=/dev/nvme0n3 00:10:27.951 [job3] 00:10:27.951 filename=/dev/nvme0n4 00:10:28.209 Could not set queue depth (nvme0n1) 00:10:28.209 Could not set queue depth (nvme0n2) 00:10:28.209 Could not set queue depth (nvme0n3) 00:10:28.209 Could not set queue depth (nvme0n4) 00:10:28.209 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:28.209 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:28.209 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:28.209 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:28.209 fio-3.35 00:10:28.209 Starting 4 threads 00:10:29.656 00:10:29.656 job0: (groupid=0, jobs=1): err= 0: pid=66429: Fri Nov 29 12:57:00 2024 00:10:29.656 read: IOPS=1595, BW=6382KiB/s (6535kB/s)(6388KiB/1001msec) 00:10:29.656 slat (nsec): min=11458, max=50793, avg=16647.72, stdev=4815.42 00:10:29.656 clat (usec): min=202, max=607, avg=304.24, stdev=90.31 00:10:29.656 lat (usec): min=216, max=627, avg=320.89, stdev=91.20 00:10:29.656 clat percentiles (usec): 00:10:29.656 | 1.00th=[ 221], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 249], 00:10:29.656 | 30.00th=[ 255], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 277], 00:10:29.656 | 70.00th=[ 285], 80.00th=[ 359], 90.00th=[ 498], 95.00th=[ 519], 00:10:29.656 | 99.00th=[ 553], 99.50th=[ 570], 99.90th=[ 594], 99.95th=[ 611], 00:10:29.656 | 99.99th=[ 611] 
00:10:29.656 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:29.656 slat (usec): min=13, max=107, avg=26.24, stdev= 8.48 00:10:29.656 clat (usec): min=119, max=580, avg=208.54, stdev=45.69 00:10:29.656 lat (usec): min=141, max=601, avg=234.78, stdev=48.55 00:10:29.656 clat percentiles (usec): 00:10:29.656 | 1.00th=[ 151], 5.00th=[ 161], 10.00th=[ 167], 20.00th=[ 178], 00:10:29.656 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 198], 60.00th=[ 206], 00:10:29.656 | 70.00th=[ 217], 80.00th=[ 229], 90.00th=[ 265], 95.00th=[ 306], 00:10:29.656 | 99.00th=[ 375], 99.50th=[ 396], 99.90th=[ 478], 99.95th=[ 490], 00:10:29.656 | 99.99th=[ 578] 00:10:29.656 bw ( KiB/s): min= 8192, max= 8192, per=28.63%, avg=8192.00, stdev= 0.00, samples=1 00:10:29.656 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:29.656 lat (usec) : 250=58.93%, 500=37.06%, 750=4.01% 00:10:29.656 cpu : usr=1.20%, sys=6.60%, ctx=3647, majf=0, minf=3 00:10:29.656 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.656 issued rwts: total=1597,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.656 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.656 job1: (groupid=0, jobs=1): err= 0: pid=66430: Fri Nov 29 12:57:00 2024 00:10:29.656 read: IOPS=1499, BW=5998KiB/s (6142kB/s)(6004KiB/1001msec) 00:10:29.656 slat (nsec): min=14585, max=80026, avg=24000.71, stdev=7166.06 00:10:29.656 clat (usec): min=181, max=764, avg=367.80, stdev=99.58 00:10:29.656 lat (usec): min=196, max=786, avg=391.80, stdev=102.90 00:10:29.656 clat percentiles (usec): 00:10:29.656 | 1.00th=[ 221], 5.00th=[ 273], 10.00th=[ 285], 20.00th=[ 306], 00:10:29.656 | 30.00th=[ 318], 40.00th=[ 330], 50.00th=[ 338], 60.00th=[ 351], 00:10:29.656 | 70.00th=[ 367], 80.00th=[ 392], 90.00th=[ 553], 95.00th=[ 611], 00:10:29.656 | 99.00th=[ 676], 99.50th=[ 693], 99.90th=[ 758], 99.95th=[ 766], 00:10:29.656 | 99.99th=[ 766] 00:10:29.656 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:29.656 slat (usec): min=19, max=107, avg=30.19, stdev= 5.35 00:10:29.656 clat (usec): min=119, max=1935, avg=232.45, stdev=69.79 00:10:29.656 lat (usec): min=146, max=1983, avg=262.64, stdev=70.47 00:10:29.656 clat percentiles (usec): 00:10:29.656 | 1.00th=[ 133], 5.00th=[ 149], 10.00th=[ 163], 20.00th=[ 184], 00:10:29.656 | 30.00th=[ 200], 40.00th=[ 221], 50.00th=[ 235], 60.00th=[ 247], 00:10:29.656 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 289], 95.00th=[ 318], 00:10:29.656 | 99.00th=[ 379], 99.50th=[ 408], 99.90th=[ 848], 99.95th=[ 1942], 00:10:29.656 | 99.99th=[ 1942] 00:10:29.657 bw ( KiB/s): min= 8192, max= 8192, per=28.63%, avg=8192.00, stdev= 0.00, samples=1 00:10:29.657 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:29.657 lat (usec) : 250=32.76%, 500=61.15%, 750=5.96%, 1000=0.10% 00:10:29.657 lat (msec) : 2=0.03% 00:10:29.657 cpu : usr=2.10%, sys=6.40%, ctx=3043, majf=0, minf=5 00:10:29.657 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.657 issued rwts: total=1501,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.657 latency : target=0, window=0, percentile=100.00%, depth=1 
00:10:29.657 job2: (groupid=0, jobs=1): err= 0: pid=66432: Fri Nov 29 12:57:00 2024 00:10:29.657 read: IOPS=1947, BW=7788KiB/s (7975kB/s)(7804KiB/1002msec) 00:10:29.657 slat (nsec): min=11582, max=44913, avg=16408.99, stdev=4269.60 00:10:29.657 clat (usec): min=160, max=4994, avg=271.64, stdev=220.79 00:10:29.657 lat (usec): min=175, max=5016, avg=288.05, stdev=222.07 00:10:29.657 clat percentiles (usec): 00:10:29.657 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 186], 20.00th=[ 194], 00:10:29.657 | 30.00th=[ 200], 40.00th=[ 208], 50.00th=[ 217], 60.00th=[ 227], 00:10:29.657 | 70.00th=[ 243], 80.00th=[ 355], 90.00th=[ 469], 95.00th=[ 506], 00:10:29.657 | 99.00th=[ 553], 99.50th=[ 586], 99.90th=[ 4948], 99.95th=[ 5014], 00:10:29.657 | 99.99th=[ 5014] 00:10:29.657 write: IOPS=2043, BW=8176KiB/s (8372kB/s)(8192KiB/1002msec); 0 zone resets 00:10:29.657 slat (nsec): min=13084, max=80219, avg=22884.54, stdev=6007.17 00:10:29.657 clat (usec): min=111, max=4988, avg=187.38, stdev=127.87 00:10:29.657 lat (usec): min=128, max=5030, avg=210.27, stdev=129.16 00:10:29.657 clat percentiles (usec): 00:10:29.657 | 1.00th=[ 121], 5.00th=[ 131], 10.00th=[ 137], 20.00th=[ 147], 00:10:29.657 | 30.00th=[ 155], 40.00th=[ 161], 50.00th=[ 169], 60.00th=[ 180], 00:10:29.657 | 70.00th=[ 196], 80.00th=[ 223], 90.00th=[ 251], 95.00th=[ 277], 00:10:29.657 | 99.00th=[ 351], 99.50th=[ 363], 99.90th=[ 1827], 99.95th=[ 1827], 00:10:29.657 | 99.99th=[ 5014] 00:10:29.657 bw ( KiB/s): min=11256, max=11256, per=39.34%, avg=11256.00, stdev= 0.00, samples=1 00:10:29.657 iops : min= 2814, max= 2814, avg=2814.00, stdev= 0.00, samples=1 00:10:29.657 lat (usec) : 250=81.17%, 500=15.68%, 750=2.88%, 1000=0.05% 00:10:29.657 lat (msec) : 2=0.05%, 4=0.10%, 10=0.08% 00:10:29.657 cpu : usr=1.70%, sys=6.39%, ctx=4003, majf=0, minf=13 00:10:29.657 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.657 issued rwts: total=1951,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.657 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.657 job3: (groupid=0, jobs=1): err= 0: pid=66437: Fri Nov 29 12:57:00 2024 00:10:29.657 read: IOPS=1285, BW=5143KiB/s (5266kB/s)(5148KiB/1001msec) 00:10:29.657 slat (nsec): min=14907, max=93014, avg=24291.40, stdev=8866.80 00:10:29.657 clat (usec): min=212, max=2954, avg=373.74, stdev=125.94 00:10:29.657 lat (usec): min=231, max=2972, avg=398.03, stdev=129.24 00:10:29.657 clat percentiles (usec): 00:10:29.657 | 1.00th=[ 262], 5.00th=[ 285], 10.00th=[ 297], 20.00th=[ 314], 00:10:29.657 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 347], 60.00th=[ 359], 00:10:29.657 | 70.00th=[ 375], 80.00th=[ 424], 90.00th=[ 494], 95.00th=[ 537], 00:10:29.657 | 99.00th=[ 660], 99.50th=[ 742], 99.90th=[ 2573], 99.95th=[ 2966], 00:10:29.657 | 99.99th=[ 2966] 00:10:29.657 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:29.657 slat (usec): min=20, max=273, avg=35.36, stdev=12.93 00:10:29.657 clat (usec): min=116, max=1951, avg=276.80, stdev=106.84 00:10:29.657 lat (usec): min=140, max=2006, avg=312.16, stdev=114.43 00:10:29.657 clat percentiles (usec): 00:10:29.657 | 1.00th=[ 135], 5.00th=[ 153], 10.00th=[ 169], 20.00th=[ 188], 00:10:29.657 | 30.00th=[ 219], 40.00th=[ 237], 50.00th=[ 251], 60.00th=[ 269], 00:10:29.657 | 70.00th=[ 297], 80.00th=[ 367], 90.00th=[ 437], 95.00th=[ 469], 00:10:29.657 | 
99.00th=[ 515], 99.50th=[ 553], 99.90th=[ 644], 99.95th=[ 1958], 00:10:29.657 | 99.99th=[ 1958] 00:10:29.657 bw ( KiB/s): min= 7240, max= 7240, per=25.30%, avg=7240.00, stdev= 0.00, samples=1 00:10:29.657 iops : min= 1810, max= 1810, avg=1810.00, stdev= 0.00, samples=1 00:10:29.657 lat (usec) : 250=26.96%, 500=67.94%, 750=4.85%, 1000=0.11% 00:10:29.657 lat (msec) : 2=0.07%, 4=0.07% 00:10:29.657 cpu : usr=2.00%, sys=6.70%, ctx=2830, majf=0, minf=15 00:10:29.657 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.657 issued rwts: total=1287,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.657 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.657 00:10:29.657 Run status group 0 (all jobs): 00:10:29.657 READ: bw=24.7MiB/s (25.9MB/s), 5143KiB/s-7788KiB/s (5266kB/s-7975kB/s), io=24.8MiB (26.0MB), run=1001-1002msec 00:10:29.657 WRITE: bw=27.9MiB/s (29.3MB/s), 6138KiB/s-8184KiB/s (6285kB/s-8380kB/s), io=28.0MiB (29.4MB), run=1001-1002msec 00:10:29.657 00:10:29.657 Disk stats (read/write): 00:10:29.657 nvme0n1: ios=1585/1575, merge=0/0, ticks=496/321, in_queue=817, util=86.24% 00:10:29.657 nvme0n2: ios=1152/1536, merge=0/0, ticks=422/354, in_queue=776, util=86.53% 00:10:29.657 nvme0n3: ios=1618/2048, merge=0/0, ticks=375/385, in_queue=760, util=87.97% 00:10:29.657 nvme0n4: ios=1024/1279, merge=0/0, ticks=401/382, in_queue=783, util=89.56% 00:10:29.657 12:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:29.657 [global] 00:10:29.657 thread=1 00:10:29.657 invalidate=1 00:10:29.657 rw=randwrite 00:10:29.657 time_based=1 00:10:29.657 runtime=1 00:10:29.657 ioengine=libaio 00:10:29.657 direct=1 00:10:29.657 bs=4096 00:10:29.657 iodepth=1 00:10:29.657 norandommap=0 00:10:29.657 numjobs=1 00:10:29.657 00:10:29.657 verify_dump=1 00:10:29.657 verify_backlog=512 00:10:29.657 verify_state_save=0 00:10:29.657 do_verify=1 00:10:29.657 verify=crc32c-intel 00:10:29.657 [job0] 00:10:29.657 filename=/dev/nvme0n1 00:10:29.657 [job1] 00:10:29.657 filename=/dev/nvme0n2 00:10:29.657 [job2] 00:10:29.657 filename=/dev/nvme0n3 00:10:29.657 [job3] 00:10:29.657 filename=/dev/nvme0n4 00:10:29.657 Could not set queue depth (nvme0n1) 00:10:29.657 Could not set queue depth (nvme0n2) 00:10:29.657 Could not set queue depth (nvme0n3) 00:10:29.657 Could not set queue depth (nvme0n4) 00:10:29.657 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.657 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.657 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.657 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.657 fio-3.35 00:10:29.657 Starting 4 threads 00:10:31.034 00:10:31.034 job0: (groupid=0, jobs=1): err= 0: pid=66491: Fri Nov 29 12:57:02 2024 00:10:31.034 read: IOPS=1516, BW=6066KiB/s (6212kB/s)(6072KiB/1001msec) 00:10:31.034 slat (nsec): min=10311, max=49722, avg=17286.78, stdev=4475.18 00:10:31.034 clat (usec): min=145, max=2667, avg=379.14, stdev=166.24 00:10:31.034 lat (usec): min=158, max=2682, avg=396.42, stdev=168.16 00:10:31.034 
clat percentiles (usec): 00:10:31.034 | 1.00th=[ 161], 5.00th=[ 174], 10.00th=[ 184], 20.00th=[ 200], 00:10:31.034 | 30.00th=[ 225], 40.00th=[ 355], 50.00th=[ 396], 60.00th=[ 433], 00:10:31.034 | 70.00th=[ 465], 80.00th=[ 529], 90.00th=[ 562], 95.00th=[ 603], 00:10:31.034 | 99.00th=[ 660], 99.50th=[ 758], 99.90th=[ 2180], 99.95th=[ 2671], 00:10:31.034 | 99.99th=[ 2671] 00:10:31.034 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:31.034 slat (usec): min=13, max=14459, avg=33.40, stdev=369.00 00:10:31.034 clat (usec): min=4, max=778, avg=221.17, stdev=83.36 00:10:31.034 lat (usec): min=123, max=14463, avg=254.56, stdev=373.45 00:10:31.034 clat percentiles (usec): 00:10:31.034 | 1.00th=[ 114], 5.00th=[ 130], 10.00th=[ 139], 20.00th=[ 157], 00:10:31.034 | 30.00th=[ 174], 40.00th=[ 190], 50.00th=[ 206], 60.00th=[ 225], 00:10:31.034 | 70.00th=[ 239], 80.00th=[ 262], 90.00th=[ 318], 95.00th=[ 412], 00:10:31.034 | 99.00th=[ 498], 99.50th=[ 510], 99.90th=[ 693], 99.95th=[ 783], 00:10:31.034 | 99.99th=[ 783] 00:10:31.034 bw ( KiB/s): min= 8192, max= 8192, per=32.02%, avg=8192.00, stdev= 0.00, samples=1 00:10:31.034 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:31.034 lat (usec) : 10=0.07%, 250=53.86%, 500=33.30%, 750=12.48%, 1000=0.23% 00:10:31.034 lat (msec) : 4=0.07% 00:10:31.034 cpu : usr=1.50%, sys=5.40%, ctx=3062, majf=0, minf=15 00:10:31.034 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.034 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.034 issued rwts: total=1518,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.034 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.034 job1: (groupid=0, jobs=1): err= 0: pid=66492: Fri Nov 29 12:57:02 2024 00:10:31.034 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:10:31.034 slat (nsec): min=14459, max=71646, avg=29215.32, stdev=9027.89 00:10:31.034 clat (usec): min=176, max=3414, avg=498.76, stdev=186.51 00:10:31.034 lat (usec): min=192, max=3438, avg=527.98, stdev=190.65 00:10:31.034 clat percentiles (usec): 00:10:31.034 | 1.00th=[ 217], 5.00th=[ 285], 10.00th=[ 310], 20.00th=[ 359], 00:10:31.034 | 30.00th=[ 392], 40.00th=[ 424], 50.00th=[ 445], 60.00th=[ 490], 00:10:31.034 | 70.00th=[ 603], 80.00th=[ 676], 90.00th=[ 734], 95.00th=[ 783], 00:10:31.034 | 99.00th=[ 840], 99.50th=[ 857], 99.90th=[ 1106], 99.95th=[ 3425], 00:10:31.034 | 99.99th=[ 3425] 00:10:31.034 write: IOPS=1509, BW=6038KiB/s (6183kB/s)(6044KiB/1001msec); 0 zone resets 00:10:31.034 slat (usec): min=18, max=116, avg=33.56, stdev= 8.20 00:10:31.034 clat (usec): min=115, max=1677, avg=264.48, stdev=89.09 00:10:31.034 lat (usec): min=138, max=1713, avg=298.04, stdev=90.63 00:10:31.034 clat percentiles (usec): 00:10:31.034 | 1.00th=[ 131], 5.00th=[ 147], 10.00th=[ 159], 20.00th=[ 184], 00:10:31.034 | 30.00th=[ 208], 40.00th=[ 239], 50.00th=[ 265], 60.00th=[ 285], 00:10:31.034 | 70.00th=[ 302], 80.00th=[ 338], 90.00th=[ 379], 95.00th=[ 400], 00:10:31.034 | 99.00th=[ 445], 99.50th=[ 494], 99.90th=[ 709], 99.95th=[ 1680], 00:10:31.034 | 99.99th=[ 1680] 00:10:31.034 bw ( KiB/s): min= 7008, max= 7008, per=27.39%, avg=7008.00, stdev= 0.00, samples=1 00:10:31.034 iops : min= 1752, max= 1752, avg=1752.00, stdev= 0.00, samples=1 00:10:31.034 lat (usec) : 250=27.14%, 500=56.77%, 750=12.82%, 1000=3.12% 00:10:31.034 lat (msec) : 2=0.12%, 4=0.04% 00:10:31.034 cpu : 
usr=2.10%, sys=6.60%, ctx=2537, majf=0, minf=5 00:10:31.034 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.034 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.034 issued rwts: total=1024,1511,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.034 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.034 job2: (groupid=0, jobs=1): err= 0: pid=66493: Fri Nov 29 12:57:02 2024 00:10:31.034 read: IOPS=1882, BW=7528KiB/s (7709kB/s)(7536KiB/1001msec) 00:10:31.034 slat (usec): min=13, max=110, avg=15.74, stdev= 4.64 00:10:31.034 clat (usec): min=220, max=580, avg=273.58, stdev=20.70 00:10:31.034 lat (usec): min=239, max=650, avg=289.32, stdev=22.18 00:10:31.034 clat percentiles (usec): 00:10:31.034 | 1.00th=[ 235], 5.00th=[ 245], 10.00th=[ 251], 20.00th=[ 258], 00:10:31.034 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:10:31.034 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 297], 95.00th=[ 306], 00:10:31.034 | 99.00th=[ 330], 99.50th=[ 338], 99.90th=[ 375], 99.95th=[ 578], 00:10:31.034 | 99.99th=[ 578] 00:10:31.034 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:31.034 slat (nsec): min=18628, max=93163, avg=23238.32, stdev=6357.31 00:10:31.034 clat (usec): min=128, max=705, avg=195.52, stdev=29.40 00:10:31.034 lat (usec): min=149, max=739, avg=218.76, stdev=31.69 00:10:31.034 clat percentiles (usec): 00:10:31.034 | 1.00th=[ 147], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 176], 00:10:31.034 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 198], 00:10:31.034 | 70.00th=[ 206], 80.00th=[ 215], 90.00th=[ 225], 95.00th=[ 237], 00:10:31.034 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 515], 99.95th=[ 578], 00:10:31.034 | 99.99th=[ 709] 00:10:31.034 bw ( KiB/s): min= 8192, max= 8192, per=32.02%, avg=8192.00, stdev= 0.00, samples=1 00:10:31.034 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:31.034 lat (usec) : 250=55.39%, 500=44.48%, 750=0.13% 00:10:31.034 cpu : usr=1.70%, sys=5.90%, ctx=3932, majf=0, minf=13 00:10:31.034 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.034 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.034 issued rwts: total=1884,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.034 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.034 job3: (groupid=0, jobs=1): err= 0: pid=66494: Fri Nov 29 12:57:02 2024 00:10:31.034 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:10:31.034 slat (usec): min=10, max=134, avg=19.62, stdev= 6.57 00:10:31.034 clat (usec): min=222, max=2210, avg=448.51, stdev=96.65 00:10:31.034 lat (usec): min=240, max=2224, avg=468.13, stdev=96.51 00:10:31.034 clat percentiles (usec): 00:10:31.034 | 1.00th=[ 273], 5.00th=[ 310], 10.00th=[ 338], 20.00th=[ 388], 00:10:31.034 | 30.00th=[ 412], 40.00th=[ 433], 50.00th=[ 445], 60.00th=[ 465], 00:10:31.034 | 70.00th=[ 482], 80.00th=[ 506], 90.00th=[ 537], 95.00th=[ 570], 00:10:31.034 | 99.00th=[ 685], 99.50th=[ 701], 99.90th=[ 930], 99.95th=[ 2212], 00:10:31.034 | 99.99th=[ 2212] 00:10:31.034 write: IOPS=1306, BW=5227KiB/s (5352kB/s)(5232KiB/1001msec); 0 zone resets 00:10:31.034 slat (usec): min=13, max=2325, avg=37.05, stdev=64.21 00:10:31.034 clat (usec): min=4, max=790, avg=355.50, stdev=106.54 
00:10:31.034 lat (usec): min=174, max=2329, avg=392.55, stdev=123.38 00:10:31.034 clat percentiles (usec): 00:10:31.034 | 1.00th=[ 161], 5.00th=[ 204], 10.00th=[ 231], 20.00th=[ 260], 00:10:31.034 | 30.00th=[ 281], 40.00th=[ 306], 50.00th=[ 343], 60.00th=[ 383], 00:10:31.034 | 70.00th=[ 420], 80.00th=[ 461], 90.00th=[ 510], 95.00th=[ 529], 00:10:31.034 | 99.00th=[ 578], 99.50th=[ 594], 99.90th=[ 775], 99.95th=[ 791], 00:10:31.034 | 99.99th=[ 791] 00:10:31.034 bw ( KiB/s): min= 5640, max= 5640, per=22.04%, avg=5640.00, stdev= 0.00, samples=1 00:10:31.034 iops : min= 1410, max= 1410, avg=1410.00, stdev= 0.00, samples=1 00:10:31.034 lat (usec) : 10=0.04%, 250=9.18%, 500=74.57%, 750=15.99%, 1000=0.17% 00:10:31.034 lat (msec) : 4=0.04% 00:10:31.034 cpu : usr=2.40%, sys=5.10%, ctx=2343, majf=0, minf=15 00:10:31.034 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.034 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.034 issued rwts: total=1024,1308,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.034 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.034 00:10:31.034 Run status group 0 (all jobs): 00:10:31.034 READ: bw=21.3MiB/s (22.3MB/s), 4092KiB/s-7528KiB/s (4190kB/s-7709kB/s), io=21.3MiB (22.3MB), run=1001-1001msec 00:10:31.034 WRITE: bw=25.0MiB/s (26.2MB/s), 5227KiB/s-8184KiB/s (5352kB/s-8380kB/s), io=25.0MiB (26.2MB), run=1001-1001msec 00:10:31.034 00:10:31.034 Disk stats (read/write): 00:10:31.034 nvme0n1: ios=1314/1536, merge=0/0, ticks=505/337, in_queue=842, util=90.08% 00:10:31.034 nvme0n2: ios=1073/1120, merge=0/0, ticks=524/288, in_queue=812, util=89.30% 00:10:31.034 nvme0n3: ios=1557/1916, merge=0/0, ticks=451/392, in_queue=843, util=89.76% 00:10:31.034 nvme0n4: ios=1051/1029, merge=0/0, ticks=496/348, in_queue=844, util=90.64% 00:10:31.034 12:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:31.034 [global] 00:10:31.034 thread=1 00:10:31.034 invalidate=1 00:10:31.034 rw=write 00:10:31.034 time_based=1 00:10:31.034 runtime=1 00:10:31.034 ioengine=libaio 00:10:31.034 direct=1 00:10:31.034 bs=4096 00:10:31.034 iodepth=128 00:10:31.034 norandommap=0 00:10:31.034 numjobs=1 00:10:31.034 00:10:31.034 verify_dump=1 00:10:31.034 verify_backlog=512 00:10:31.034 verify_state_save=0 00:10:31.034 do_verify=1 00:10:31.034 verify=crc32c-intel 00:10:31.034 [job0] 00:10:31.034 filename=/dev/nvme0n1 00:10:31.034 [job1] 00:10:31.034 filename=/dev/nvme0n2 00:10:31.034 [job2] 00:10:31.034 filename=/dev/nvme0n3 00:10:31.034 [job3] 00:10:31.034 filename=/dev/nvme0n4 00:10:31.034 Could not set queue depth (nvme0n1) 00:10:31.034 Could not set queue depth (nvme0n2) 00:10:31.034 Could not set queue depth (nvme0n3) 00:10:31.034 Could not set queue depth (nvme0n4) 00:10:31.034 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:31.034 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:31.034 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:31.034 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:31.034 fio-3.35 00:10:31.034 Starting 4 threads 00:10:32.425 00:10:32.425 job0: (groupid=0, jobs=1): 
err= 0: pid=66549: Fri Nov 29 12:57:03 2024 00:10:32.425 read: IOPS=4262, BW=16.6MiB/s (17.5MB/s)(16.7MiB/1003msec) 00:10:32.425 slat (usec): min=4, max=6175, avg=110.39, stdev=404.58 00:10:32.425 clat (usec): min=506, max=21509, avg=14558.84, stdev=1968.16 00:10:32.425 lat (usec): min=2027, max=23861, avg=14669.23, stdev=1936.12 00:10:32.425 clat percentiles (usec): 00:10:32.425 | 1.00th=[ 6128], 5.00th=[11469], 10.00th=[12911], 20.00th=[13829], 00:10:32.425 | 30.00th=[14484], 40.00th=[14746], 50.00th=[14877], 60.00th=[15008], 00:10:32.425 | 70.00th=[15139], 80.00th=[15270], 90.00th=[15533], 95.00th=[16057], 00:10:32.425 | 99.00th=[21103], 99.50th=[21627], 99.90th=[21627], 99.95th=[21627], 00:10:32.425 | 99.99th=[21627] 00:10:32.425 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:10:32.425 slat (usec): min=13, max=3932, avg=106.51, stdev=459.98 00:10:32.425 clat (usec): min=8920, max=18346, avg=13985.83, stdev=1176.53 00:10:32.425 lat (usec): min=9044, max=18362, avg=14092.34, stdev=1101.50 00:10:32.425 clat percentiles (usec): 00:10:32.425 | 1.00th=[11207], 5.00th=[11600], 10.00th=[11994], 20.00th=[13304], 00:10:32.425 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[14353], 00:10:32.425 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15139], 95.00th=[15533], 00:10:32.425 | 99.00th=[16319], 99.50th=[17433], 99.90th=[18220], 99.95th=[18220], 00:10:32.425 | 99.99th=[18220] 00:10:32.425 bw ( KiB/s): min=18280, max=18584, per=37.67%, avg=18432.00, stdev=214.96, samples=2 00:10:32.425 iops : min= 4570, max= 4646, avg=4608.00, stdev=53.74, samples=2 00:10:32.425 lat (usec) : 750=0.01% 00:10:32.425 lat (msec) : 4=0.24%, 10=0.98%, 20=97.86%, 50=0.91% 00:10:32.425 cpu : usr=3.69%, sys=14.07%, ctx=417, majf=0, minf=1 00:10:32.425 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:32.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:32.425 issued rwts: total=4275,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.425 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:32.425 job1: (groupid=0, jobs=1): err= 0: pid=66550: Fri Nov 29 12:57:03 2024 00:10:32.425 read: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec) 00:10:32.425 slat (usec): min=7, max=7177, avg=263.70, stdev=887.03 00:10:32.425 clat (usec): min=18459, max=42553, avg=34242.51, stdev=3531.03 00:10:32.425 lat (usec): min=22907, max=42872, avg=34506.21, stdev=3465.66 00:10:32.425 clat percentiles (usec): 00:10:32.425 | 1.00th=[23987], 5.00th=[28181], 10.00th=[30016], 20.00th=[31327], 00:10:32.425 | 30.00th=[32637], 40.00th=[33424], 50.00th=[34866], 60.00th=[35390], 00:10:32.425 | 70.00th=[36439], 80.00th=[37487], 90.00th=[38536], 95.00th=[39060], 00:10:32.425 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:32.425 | 99.99th=[42730] 00:10:32.425 write: IOPS=2119, BW=8477KiB/s (8680kB/s)(8536KiB/1007msec); 0 zone resets 00:10:32.425 slat (usec): min=13, max=7837, avg=207.08, stdev=765.18 00:10:32.425 clat (usec): min=6175, max=38455, avg=26484.23, stdev=4829.43 00:10:32.425 lat (usec): min=6962, max=38498, avg=26691.31, stdev=4824.91 00:10:32.425 clat percentiles (usec): 00:10:32.425 | 1.00th=[10814], 5.00th=[19268], 10.00th=[20579], 20.00th=[22938], 00:10:32.425 | 30.00th=[24249], 40.00th=[24773], 50.00th=[26346], 60.00th=[27657], 00:10:32.425 | 70.00th=[28967], 80.00th=[31065], 90.00th=[32637], 95.00th=[33817], 
00:10:32.425 | 99.00th=[35914], 99.50th=[36439], 99.90th=[36963], 99.95th=[38011], 00:10:32.425 | 99.99th=[38536] 00:10:32.425 bw ( KiB/s): min= 8192, max= 8208, per=16.76%, avg=8200.00, stdev=11.31, samples=2 00:10:32.425 iops : min= 2048, max= 2052, avg=2050.00, stdev= 2.83, samples=2 00:10:32.425 lat (msec) : 10=0.45%, 20=3.16%, 50=96.39% 00:10:32.426 cpu : usr=2.29%, sys=6.96%, ctx=710, majf=0, minf=6 00:10:32.426 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:10:32.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:32.426 issued rwts: total=2048,2134,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.426 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:32.426 job2: (groupid=0, jobs=1): err= 0: pid=66551: Fri Nov 29 12:57:03 2024 00:10:32.426 read: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec) 00:10:32.426 slat (usec): min=5, max=10036, avg=239.01, stdev=960.16 00:10:32.426 clat (usec): min=10762, max=46380, avg=30197.98, stdev=7640.28 00:10:32.426 lat (usec): min=13484, max=46415, avg=30436.99, stdev=7681.57 00:10:32.426 clat percentiles (usec): 00:10:32.426 | 1.00th=[13566], 5.00th=[13829], 10.00th=[18220], 20.00th=[23987], 00:10:32.426 | 30.00th=[26870], 40.00th=[29754], 50.00th=[32113], 60.00th=[33424], 00:10:32.426 | 70.00th=[35390], 80.00th=[36963], 90.00th=[39060], 95.00th=[40109], 00:10:32.426 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:10:32.426 | 99.99th=[46400] 00:10:32.426 write: IOPS=2498, BW=9992KiB/s (10.2MB/s)(9.84MiB/1008msec); 0 zone resets 00:10:32.426 slat (usec): min=6, max=7013, avg=195.61, stdev=643.70 00:10:32.426 clat (usec): min=5412, max=41470, avg=25981.99, stdev=7788.66 00:10:32.426 lat (usec): min=8132, max=41499, avg=26177.60, stdev=7831.13 00:10:32.426 clat percentiles (usec): 00:10:32.426 | 1.00th=[10683], 5.00th=[12649], 10.00th=[12911], 20.00th=[13698], 00:10:32.426 | 30.00th=[25560], 40.00th=[27395], 50.00th=[28443], 60.00th=[29230], 00:10:32.426 | 70.00th=[30278], 80.00th=[31327], 90.00th=[34866], 95.00th=[36963], 00:10:32.426 | 99.00th=[39584], 99.50th=[40109], 99.90th=[40633], 99.95th=[40633], 00:10:32.426 | 99.99th=[41681] 00:10:32.426 bw ( KiB/s): min= 7800, max=11320, per=19.54%, avg=9560.00, stdev=2489.02, samples=2 00:10:32.426 iops : min= 1950, max= 2830, avg=2390.00, stdev=622.25, samples=2 00:10:32.426 lat (msec) : 10=0.26%, 20=18.20%, 50=81.54% 00:10:32.426 cpu : usr=2.78%, sys=6.95%, ctx=806, majf=0, minf=1 00:10:32.426 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:10:32.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:32.426 issued rwts: total=2048,2518,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.426 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:32.426 job3: (groupid=0, jobs=1): err= 0: pid=66553: Fri Nov 29 12:57:03 2024 00:10:32.426 read: IOPS=3022, BW=11.8MiB/s (12.4MB/s)(11.9MiB/1006msec) 00:10:32.426 slat (usec): min=4, max=5497, avg=159.96, stdev=786.20 00:10:32.426 clat (usec): min=1595, max=25860, avg=20571.36, stdev=2185.97 00:10:32.426 lat (usec): min=5358, max=25872, avg=20731.32, stdev=2042.60 00:10:32.426 clat percentiles (usec): 00:10:32.426 | 1.00th=[ 5997], 5.00th=[16581], 10.00th=[20317], 20.00th=[20579], 00:10:32.426 | 30.00th=[20579], 40.00th=[20841], 
50.00th=[20841], 60.00th=[21103], 00:10:32.426 | 70.00th=[21365], 80.00th=[21365], 90.00th=[21627], 95.00th=[21890], 00:10:32.426 | 99.00th=[24249], 99.50th=[25822], 99.90th=[25822], 99.95th=[25822], 00:10:32.426 | 99.99th=[25822] 00:10:32.426 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:10:32.426 slat (usec): min=12, max=5673, avg=158.55, stdev=746.45 00:10:32.426 clat (usec): min=15336, max=25496, avg=20744.78, stdev=1609.47 00:10:32.426 lat (usec): min=16024, max=25520, avg=20903.33, stdev=1445.02 00:10:32.426 clat percentiles (usec): 00:10:32.426 | 1.00th=[16057], 5.00th=[18482], 10.00th=[19792], 20.00th=[20055], 00:10:32.426 | 30.00th=[20055], 40.00th=[20317], 50.00th=[20579], 60.00th=[20579], 00:10:32.426 | 70.00th=[20841], 80.00th=[21103], 90.00th=[23725], 95.00th=[24511], 00:10:32.426 | 99.00th=[25035], 99.50th=[25035], 99.90th=[25560], 99.95th=[25560], 00:10:32.426 | 99.99th=[25560] 00:10:32.426 bw ( KiB/s): min=12288, max=12288, per=25.11%, avg=12288.00, stdev= 0.00, samples=2 00:10:32.426 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:10:32.426 lat (msec) : 2=0.02%, 10=0.52%, 20=14.28%, 50=85.18% 00:10:32.426 cpu : usr=3.48%, sys=9.35%, ctx=220, majf=0, minf=5 00:10:32.426 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:32.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:32.426 issued rwts: total=3041,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.426 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:32.426 00:10:32.426 Run status group 0 (all jobs): 00:10:32.426 READ: bw=44.2MiB/s (46.4MB/s), 8127KiB/s-16.6MiB/s (8322kB/s-17.5MB/s), io=44.6MiB (46.7MB), run=1003-1008msec 00:10:32.426 WRITE: bw=47.8MiB/s (50.1MB/s), 8477KiB/s-17.9MiB/s (8680kB/s-18.8MB/s), io=48.2MiB (50.5MB), run=1003-1008msec 00:10:32.426 00:10:32.426 Disk stats (read/write): 00:10:32.426 nvme0n1: ios=3684/4096, merge=0/0, ticks=12800/12255, in_queue=25055, util=90.17% 00:10:32.426 nvme0n2: ios=1655/2048, merge=0/0, ticks=13327/12877, in_queue=26204, util=89.29% 00:10:32.426 nvme0n3: ios=1918/2048, merge=0/0, ticks=14158/11899, in_queue=26057, util=89.87% 00:10:32.426 nvme0n4: ios=2581/2720, merge=0/0, ticks=12691/12591, in_queue=25282, util=90.02% 00:10:32.426 12:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:32.426 [global] 00:10:32.426 thread=1 00:10:32.426 invalidate=1 00:10:32.426 rw=randwrite 00:10:32.426 time_based=1 00:10:32.426 runtime=1 00:10:32.426 ioengine=libaio 00:10:32.426 direct=1 00:10:32.426 bs=4096 00:10:32.426 iodepth=128 00:10:32.426 norandommap=0 00:10:32.426 numjobs=1 00:10:32.426 00:10:32.426 verify_dump=1 00:10:32.426 verify_backlog=512 00:10:32.426 verify_state_save=0 00:10:32.426 do_verify=1 00:10:32.426 verify=crc32c-intel 00:10:32.426 [job0] 00:10:32.426 filename=/dev/nvme0n1 00:10:32.426 [job1] 00:10:32.426 filename=/dev/nvme0n2 00:10:32.426 [job2] 00:10:32.426 filename=/dev/nvme0n3 00:10:32.426 [job3] 00:10:32.426 filename=/dev/nvme0n4 00:10:32.426 Could not set queue depth (nvme0n1) 00:10:32.426 Could not set queue depth (nvme0n2) 00:10:32.426 Could not set queue depth (nvme0n3) 00:10:32.426 Could not set queue depth (nvme0n4) 00:10:32.426 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:10:32.426 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:32.426 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:32.426 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:32.426 fio-3.35 00:10:32.426 Starting 4 threads 00:10:33.801 00:10:33.801 job0: (groupid=0, jobs=1): err= 0: pid=66611: Fri Nov 29 12:57:04 2024 00:10:33.801 read: IOPS=3523, BW=13.8MiB/s (14.4MB/s)(13.8MiB/1002msec) 00:10:33.801 slat (usec): min=4, max=13345, avg=151.06, stdev=809.62 00:10:33.801 clat (usec): min=1465, max=45207, avg=18915.96, stdev=5768.57 00:10:33.801 lat (usec): min=1473, max=45224, avg=19067.02, stdev=5766.38 00:10:33.801 clat percentiles (usec): 00:10:33.801 | 1.00th=[ 7373], 5.00th=[13435], 10.00th=[14484], 20.00th=[15795], 00:10:33.801 | 30.00th=[15926], 40.00th=[15926], 50.00th=[16057], 60.00th=[17433], 00:10:33.801 | 70.00th=[20579], 80.00th=[24511], 90.00th=[25297], 95.00th=[26346], 00:10:33.801 | 99.00th=[42206], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351], 00:10:33.801 | 99.99th=[45351] 00:10:33.801 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:10:33.801 slat (usec): min=14, max=5610, avg=121.43, stdev=569.87 00:10:33.801 clat (usec): min=9840, max=40786, avg=16521.22, stdev=4762.00 00:10:33.801 lat (usec): min=11963, max=40835, avg=16642.65, stdev=4744.35 00:10:33.801 clat percentiles (usec): 00:10:33.801 | 1.00th=[10552], 5.00th=[12125], 10.00th=[12387], 20.00th=[12649], 00:10:33.801 | 30.00th=[12780], 40.00th=[13698], 50.00th=[16450], 60.00th=[16909], 00:10:33.801 | 70.00th=[17171], 80.00th=[19268], 90.00th=[23987], 95.00th=[24773], 00:10:33.801 | 99.00th=[32900], 99.50th=[40633], 99.90th=[40633], 99.95th=[40633], 00:10:33.801 | 99.99th=[40633] 00:10:33.801 bw ( KiB/s): min=12288, max=16384, per=28.95%, avg=14336.00, stdev=2896.31, samples=2 00:10:33.801 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:10:33.801 lat (msec) : 2=0.15%, 10=0.55%, 20=74.42%, 50=24.88% 00:10:33.801 cpu : usr=3.60%, sys=10.79%, ctx=225, majf=0, minf=15 00:10:33.801 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:10:33.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.801 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:33.801 issued rwts: total=3531,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.801 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:33.801 job1: (groupid=0, jobs=1): err= 0: pid=66613: Fri Nov 29 12:57:04 2024 00:10:33.801 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:33.801 slat (usec): min=5, max=6784, avg=160.16, stdev=799.36 00:10:33.801 clat (usec): min=11950, max=24346, avg=20698.13, stdev=1293.25 00:10:33.801 lat (usec): min=11962, max=24440, avg=20858.28, stdev=1041.39 00:10:33.801 clat percentiles (usec): 00:10:33.801 | 1.00th=[13566], 5.00th=[19006], 10.00th=[20317], 20.00th=[20317], 00:10:33.801 | 30.00th=[20579], 40.00th=[20579], 50.00th=[20841], 60.00th=[21103], 00:10:33.801 | 70.00th=[21103], 80.00th=[21365], 90.00th=[21365], 95.00th=[21890], 00:10:33.801 | 99.00th=[23200], 99.50th=[23200], 99.90th=[24249], 99.95th=[24249], 00:10:33.801 | 99.99th=[24249] 00:10:33.801 write: IOPS=3118, BW=12.2MiB/s (12.8MB/s)(12.2MiB/1001msec); 0 zone resets 00:10:33.801 slat (usec): min=8, max=5839, 
avg=155.46, stdev=761.49 00:10:33.801 clat (usec): min=512, max=24596, avg=20062.72, stdev=2142.35 00:10:33.801 lat (usec): min=537, max=24613, avg=20218.18, stdev=1996.77 00:10:33.801 clat percentiles (usec): 00:10:33.801 | 1.00th=[ 5342], 5.00th=[18744], 10.00th=[19530], 20.00th=[19792], 00:10:33.801 | 30.00th=[20055], 40.00th=[20317], 50.00th=[20317], 60.00th=[20579], 00:10:33.801 | 70.00th=[20579], 80.00th=[20841], 90.00th=[21103], 95.00th=[21365], 00:10:33.801 | 99.00th=[23462], 99.50th=[24511], 99.90th=[24511], 99.95th=[24511], 00:10:33.801 | 99.99th=[24511] 00:10:33.801 bw ( KiB/s): min=12288, max=12312, per=24.83%, avg=12300.00, stdev=16.97, samples=2 00:10:33.801 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:10:33.801 lat (usec) : 750=0.03% 00:10:33.801 lat (msec) : 10=0.77%, 20=17.26%, 50=81.93% 00:10:33.801 cpu : usr=3.00%, sys=8.00%, ctx=244, majf=0, minf=14 00:10:33.801 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:33.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.801 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:33.801 issued rwts: total=3072,3122,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.801 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:33.801 job2: (groupid=0, jobs=1): err= 0: pid=66618: Fri Nov 29 12:57:04 2024 00:10:33.801 read: IOPS=2464, BW=9856KiB/s (10.1MB/s)(9876KiB/1002msec) 00:10:33.801 slat (usec): min=6, max=8506, avg=175.30, stdev=735.88 00:10:33.801 clat (usec): min=1466, max=38631, avg=21453.86, stdev=4407.94 00:10:33.801 lat (usec): min=3070, max=38652, avg=21629.16, stdev=4468.53 00:10:33.801 clat percentiles (usec): 00:10:33.801 | 1.00th=[ 7111], 5.00th=[16319], 10.00th=[18220], 20.00th=[18482], 00:10:33.801 | 30.00th=[19006], 40.00th=[19792], 50.00th=[20841], 60.00th=[22676], 00:10:33.801 | 70.00th=[23725], 80.00th=[25560], 90.00th=[25822], 95.00th=[27132], 00:10:33.801 | 99.00th=[33162], 99.50th=[35390], 99.90th=[38011], 99.95th=[38011], 00:10:33.801 | 99.99th=[38536] 00:10:33.801 write: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec); 0 zone resets 00:10:33.801 slat (usec): min=14, max=5312, avg=211.90, stdev=724.64 00:10:33.801 clat (usec): min=13651, max=51016, avg=28663.45, stdev=10458.14 00:10:33.801 lat (usec): min=13675, max=51041, avg=28875.35, stdev=10526.29 00:10:33.801 clat percentiles (usec): 00:10:33.801 | 1.00th=[13829], 5.00th=[14484], 10.00th=[16319], 20.00th=[18220], 00:10:33.801 | 30.00th=[18482], 40.00th=[23725], 50.00th=[30802], 60.00th=[31851], 00:10:33.801 | 70.00th=[33817], 80.00th=[38536], 90.00th=[43779], 95.00th=[46924], 00:10:33.801 | 99.00th=[50594], 99.50th=[50594], 99.90th=[51119], 99.95th=[51119], 00:10:33.801 | 99.99th=[51119] 00:10:33.801 bw ( KiB/s): min=10160, max=10320, per=20.68%, avg=10240.00, stdev=113.14, samples=2 00:10:33.801 iops : min= 2540, max= 2580, avg=2560.00, stdev=28.28, samples=2 00:10:33.801 lat (msec) : 2=0.02%, 4=0.40%, 10=0.84%, 20=38.95%, 50=59.24% 00:10:33.801 lat (msec) : 100=0.56% 00:10:33.801 cpu : usr=2.50%, sys=9.09%, ctx=334, majf=0, minf=15 00:10:33.801 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:10:33.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.801 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:33.801 issued rwts: total=2469,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.801 latency : target=0, window=0, percentile=100.00%, depth=128 
00:10:33.801 job3: (groupid=0, jobs=1): err= 0: pid=66621: Fri Nov 29 12:57:04 2024 00:10:33.801 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:10:33.801 slat (usec): min=5, max=5669, avg=158.39, stdev=811.68 00:10:33.801 clat (usec): min=15318, max=22695, avg=20782.02, stdev=1025.11 00:10:33.801 lat (usec): min=16468, max=22716, avg=20940.42, stdev=648.55 00:10:33.801 clat percentiles (usec): 00:10:33.801 | 1.00th=[16188], 5.00th=[19268], 10.00th=[20317], 20.00th=[20579], 00:10:33.801 | 30.00th=[20841], 40.00th=[20841], 50.00th=[20841], 60.00th=[21103], 00:10:33.801 | 70.00th=[21103], 80.00th=[21365], 90.00th=[21627], 95.00th=[21627], 00:10:33.801 | 99.00th=[21890], 99.50th=[22152], 99.90th=[22676], 99.95th=[22676], 00:10:33.801 | 99.99th=[22676] 00:10:33.801 write: IOPS=3143, BW=12.3MiB/s (12.9MB/s)(12.3MiB/1003msec); 0 zone resets 00:10:33.801 slat (usec): min=8, max=5533, avg=156.09, stdev=761.70 00:10:33.801 clat (usec): min=1150, max=21600, avg=19810.36, stdev=2217.47 00:10:33.801 lat (usec): min=3995, max=21624, avg=19966.45, stdev=2085.00 00:10:33.801 clat percentiles (usec): 00:10:33.801 | 1.00th=[ 4883], 5.00th=[16581], 10.00th=[19268], 20.00th=[19792], 00:10:33.801 | 30.00th=[20055], 40.00th=[20055], 50.00th=[20317], 60.00th=[20317], 00:10:33.801 | 70.00th=[20579], 80.00th=[20579], 90.00th=[20841], 95.00th=[21365], 00:10:33.801 | 99.00th=[21627], 99.50th=[21627], 99.90th=[21627], 99.95th=[21627], 00:10:33.801 | 99.99th=[21627] 00:10:33.801 bw ( KiB/s): min=12288, max=12288, per=24.81%, avg=12288.00, stdev= 0.00, samples=2 00:10:33.801 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:10:33.801 lat (msec) : 2=0.02%, 10=0.77%, 20=19.12%, 50=80.10% 00:10:33.801 cpu : usr=2.99%, sys=7.68%, ctx=233, majf=0, minf=9 00:10:33.801 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:33.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.801 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:33.801 issued rwts: total=3072,3153,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.801 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:33.801 00:10:33.801 Run status group 0 (all jobs): 00:10:33.801 READ: bw=47.3MiB/s (49.6MB/s), 9856KiB/s-13.8MiB/s (10.1MB/s-14.4MB/s), io=47.4MiB (49.7MB), run=1001-1003msec 00:10:33.801 WRITE: bw=48.4MiB/s (50.7MB/s), 9.98MiB/s-14.0MiB/s (10.5MB/s-14.7MB/s), io=48.5MiB (50.9MB), run=1001-1003msec 00:10:33.801 00:10:33.801 Disk stats (read/write): 00:10:33.801 nvme0n1: ios=3122/3360, merge=0/0, ticks=12994/11573, in_queue=24567, util=88.26% 00:10:33.801 nvme0n2: ios=2580/2752, merge=0/0, ticks=11961/11852, in_queue=23813, util=87.39% 00:10:33.801 nvme0n3: ios=1955/2048, merge=0/0, ticks=14208/20114, in_queue=34322, util=88.99% 00:10:33.801 nvme0n4: ios=2560/2752, merge=0/0, ticks=11466/11402, in_queue=22868, util=89.64% 00:10:33.801 12:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:33.801 12:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66634 00:10:33.801 12:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:33.801 12:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:33.801 [global] 00:10:33.801 thread=1 00:10:33.801 invalidate=1 00:10:33.801 rw=read 00:10:33.801 time_based=1 00:10:33.801 runtime=10 00:10:33.801 
ioengine=libaio 00:10:33.801 direct=1 00:10:33.801 bs=4096 00:10:33.801 iodepth=1 00:10:33.801 norandommap=1 00:10:33.801 numjobs=1 00:10:33.801 00:10:33.801 [job0] 00:10:33.801 filename=/dev/nvme0n1 00:10:33.801 [job1] 00:10:33.802 filename=/dev/nvme0n2 00:10:33.802 [job2] 00:10:33.802 filename=/dev/nvme0n3 00:10:33.802 [job3] 00:10:33.802 filename=/dev/nvme0n4 00:10:33.802 Could not set queue depth (nvme0n1) 00:10:33.802 Could not set queue depth (nvme0n2) 00:10:33.802 Could not set queue depth (nvme0n3) 00:10:33.802 Could not set queue depth (nvme0n4) 00:10:33.802 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:33.802 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:33.802 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:33.802 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:33.802 fio-3.35 00:10:33.802 Starting 4 threads 00:10:37.084 12:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:37.084 fio: pid=66677, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:37.084 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=42115072, buflen=4096 00:10:37.084 12:57:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:37.084 fio: pid=66676, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:37.084 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=45957120, buflen=4096 00:10:37.084 12:57:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:37.084 12:57:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:37.343 fio: pid=66674, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:37.343 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=62263296, buflen=4096 00:10:37.343 12:57:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:37.343 12:57:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:37.602 fio: pid=66675, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:37.602 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=58171392, buflen=4096 00:10:37.861 00:10:37.861 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66674: Fri Nov 29 12:57:09 2024 00:10:37.861 read: IOPS=4364, BW=17.0MiB/s (17.9MB/s)(59.4MiB/3483msec) 00:10:37.861 slat (usec): min=10, max=19778, avg=17.74, stdev=225.66 00:10:37.861 clat (usec): min=133, max=2479, avg=210.20, stdev=55.41 00:10:37.861 lat (usec): min=147, max=20020, avg=227.94, stdev=232.88 00:10:37.861 clat percentiles (usec): 00:10:37.861 | 1.00th=[ 151], 5.00th=[ 163], 10.00th=[ 172], 20.00th=[ 182], 00:10:37.861 | 30.00th=[ 190], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 212], 00:10:37.861 | 70.00th=[ 223], 
80.00th=[ 235], 90.00th=[ 255], 95.00th=[ 269], 00:10:37.861 | 99.00th=[ 302], 99.50th=[ 334], 99.90th=[ 783], 99.95th=[ 1303], 00:10:37.861 | 99.99th=[ 2212] 00:10:37.861 bw ( KiB/s): min=15376, max=18624, per=32.99%, avg=17528.00, stdev=1252.97, samples=6 00:10:37.861 iops : min= 3844, max= 4656, avg=4382.00, stdev=313.24, samples=6 00:10:37.861 lat (usec) : 250=87.90%, 500=11.88%, 750=0.11%, 1000=0.04% 00:10:37.861 lat (msec) : 2=0.05%, 4=0.02% 00:10:37.861 cpu : usr=1.23%, sys=5.28%, ctx=15206, majf=0, minf=1 00:10:37.861 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.861 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.861 issued rwts: total=15202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.861 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.861 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66675: Fri Nov 29 12:57:09 2024 00:10:37.861 read: IOPS=3706, BW=14.5MiB/s (15.2MB/s)(55.5MiB/3832msec) 00:10:37.861 slat (usec): min=7, max=12959, avg=18.48, stdev=198.60 00:10:37.861 clat (usec): min=130, max=3197, avg=249.86, stdev=62.79 00:10:37.861 lat (usec): min=142, max=13196, avg=268.34, stdev=207.77 00:10:37.861 clat percentiles (usec): 00:10:37.861 | 1.00th=[ 145], 5.00th=[ 159], 10.00th=[ 172], 20.00th=[ 196], 00:10:37.862 | 30.00th=[ 233], 40.00th=[ 249], 50.00th=[ 260], 60.00th=[ 269], 00:10:37.862 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 306], 95.00th=[ 322], 00:10:37.862 | 99.00th=[ 355], 99.50th=[ 371], 99.90th=[ 510], 99.95th=[ 693], 00:10:37.862 | 99.99th=[ 3130] 00:10:37.862 bw ( KiB/s): min=13376, max=17137, per=27.22%, avg=14463.00, stdev=1412.55, samples=7 00:10:37.862 iops : min= 3344, max= 4284, avg=3615.71, stdev=353.06, samples=7 00:10:37.862 lat (usec) : 250=41.43%, 500=58.46%, 750=0.06%, 1000=0.01% 00:10:37.862 lat (msec) : 2=0.01%, 4=0.01% 00:10:37.862 cpu : usr=1.15%, sys=4.80%, ctx=14214, majf=0, minf=2 00:10:37.862 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.862 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.862 issued rwts: total=14203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.862 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.862 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66676: Fri Nov 29 12:57:09 2024 00:10:37.862 read: IOPS=3476, BW=13.6MiB/s (14.2MB/s)(43.8MiB/3228msec) 00:10:37.862 slat (usec): min=12, max=12776, avg=17.34, stdev=142.31 00:10:37.862 clat (usec): min=191, max=2067, avg=269.00, stdev=42.24 00:10:37.862 lat (usec): min=211, max=13092, avg=286.34, stdev=148.90 00:10:37.862 clat percentiles (usec): 00:10:37.862 | 1.00th=[ 215], 5.00th=[ 227], 10.00th=[ 235], 20.00th=[ 245], 00:10:37.862 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 273], 00:10:37.862 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 306], 95.00th=[ 318], 00:10:37.862 | 99.00th=[ 338], 99.50th=[ 347], 99.90th=[ 553], 99.95th=[ 881], 00:10:37.862 | 99.99th=[ 1893] 00:10:37.862 bw ( KiB/s): min=13488, max=15072, per=26.30%, avg=13973.33, stdev=620.64, samples=6 00:10:37.862 iops : min= 3372, max= 3768, avg=3493.33, stdev=155.16, samples=6 00:10:37.862 lat (usec) : 250=25.38%, 500=74.50%, 750=0.05%, 
1000=0.01% 00:10:37.862 lat (msec) : 2=0.04%, 4=0.01% 00:10:37.862 cpu : usr=1.27%, sys=4.25%, ctx=11225, majf=0, minf=1 00:10:37.862 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.862 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.862 issued rwts: total=11221,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.862 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.862 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66677: Fri Nov 29 12:57:09 2024 00:10:37.862 read: IOPS=3459, BW=13.5MiB/s (14.2MB/s)(40.2MiB/2972msec) 00:10:37.862 slat (nsec): min=8219, max=85296, avg=12314.78, stdev=4309.52 00:10:37.862 clat (usec): min=167, max=6988, avg=275.28, stdev=95.81 00:10:37.862 lat (usec): min=182, max=7000, avg=287.59, stdev=95.64 00:10:37.862 clat percentiles (usec): 00:10:37.862 | 1.00th=[ 196], 5.00th=[ 219], 10.00th=[ 233], 20.00th=[ 249], 00:10:37.862 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 281], 00:10:37.862 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 318], 95.00th=[ 330], 00:10:37.862 | 99.00th=[ 363], 99.50th=[ 383], 99.90th=[ 644], 99.95th=[ 1336], 00:10:37.862 | 99.99th=[ 5473] 00:10:37.862 bw ( KiB/s): min=13464, max=15368, per=26.31%, avg=13979.20, stdev=787.96, samples=5 00:10:37.862 iops : min= 3366, max= 3842, avg=3494.80, stdev=196.99, samples=5 00:10:37.862 lat (usec) : 250=21.88%, 500=77.99%, 750=0.03%, 1000=0.02% 00:10:37.862 lat (msec) : 2=0.04%, 4=0.01%, 10=0.02% 00:10:37.862 cpu : usr=0.94%, sys=3.90%, ctx=10285, majf=0, minf=2 00:10:37.862 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.862 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.862 issued rwts: total=10283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.862 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.862 00:10:37.862 Run status group 0 (all jobs): 00:10:37.862 READ: bw=51.9MiB/s (54.4MB/s), 13.5MiB/s-17.0MiB/s (14.2MB/s-17.9MB/s), io=199MiB (209MB), run=2972-3832msec 00:10:37.862 00:10:37.862 Disk stats (read/write): 00:10:37.862 nvme0n1: ios=14624/0, merge=0/0, ticks=3150/0, in_queue=3150, util=95.13% 00:10:37.862 nvme0n2: ios=13165/0, merge=0/0, ticks=3368/0, in_queue=3368, util=95.59% 00:10:37.862 nvme0n3: ios=10841/0, merge=0/0, ticks=2966/0, in_queue=2966, util=96.28% 00:10:37.862 nvme0n4: ios=9968/0, merge=0/0, ticks=2658/0, in_queue=2658, util=96.67% 00:10:37.862 12:57:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:37.862 12:57:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:38.121 12:57:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:38.121 12:57:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:38.379 12:57:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:38.379 12:57:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:38.638 12:57:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:38.638 12:57:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:38.897 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:38.897 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:39.156 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:39.156 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66634 00:10:39.156 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:39.156 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:39.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.156 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:39.156 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:39.156 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:39.156 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:39.156 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:39.156 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:39.156 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:39.156 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:39.156 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:39.156 nvmf hotplug test: fio failed as expected 00:10:39.156 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:39.414 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:39.414 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:39.414 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:39.414 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:39.415 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:39.415 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:39.415 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:39.415 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
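The hotplug exercise traced above reduces to a small shell pattern: start fio in the background against the exported namespaces, delete the backing bdevs over RPC while I/O is still in flight, then require that fio exits non-zero. A minimal sketch of that flow, built only from commands visible in the trace (the fio-wrapper flags and bdev names are the ones this job happened to use; a running target and connected /dev/nvme0nX namespaces are assumed):

    # Kick off a 10-second read workload against the attached namespaces.
    /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3   # let I/O ramp up before pulling the bdevs out from under it

    # Delete the backing devices while fio is still issuing reads.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_raid_delete concat0
    $rpc bdev_raid_delete raid0
    for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        $rpc bdev_malloc_delete "$m"
    done

    # fio must fail once its files disappear; a clean exit means the hotplug went unnoticed.
    fio_status=0
    wait "$fio_pid" || fio_status=$?
    if [ "$fio_status" -eq 0 ]; then
        echo 'nvmf hotplug test: fio did not fail' >&2
        exit 1
    fi
    echo 'nvmf hotplug test: fio failed as expected'

In the run above fio returned status 4 and every job reported err=95 (Operation not supported), which is the expected outcome once the namespaces disappear underneath the workload.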
00:10:39.415 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:39.415 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:39.415 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:39.415 rmmod nvme_tcp 00:10:39.415 rmmod nvme_fabrics 00:10:39.415 rmmod nvme_keyring 00:10:39.415 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:39.415 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:39.415 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:39.415 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 66245 ']' 00:10:39.415 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 66245 00:10:39.415 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 66245 ']' 00:10:39.415 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 66245 00:10:39.415 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:39.415 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:39.415 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66245 00:10:39.689 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:39.689 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:39.689 killing process with pid 66245 00:10:39.689 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66245' 00:10:39.689 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 66245 00:10:39.689 12:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 66245 00:10:39.689 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:39.689 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:39.689 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:39.689 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:39.689 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:39.689 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:39.689 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:39.689 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:39.689 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:39.689 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:39.689 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:39.689 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip 
link set nvmf_tgt_br nomaster 00:10:39.689 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:39.974 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:39.974 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:39.974 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:39.974 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:39.974 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:39.974 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:39.974 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:39.974 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:39.974 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:39.974 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:39.974 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.974 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:39.974 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.974 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:10:39.974 00:10:39.974 real 0m19.894s 00:10:39.974 user 1m15.936s 00:10:39.974 sys 0m8.869s 00:10:39.974 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:39.974 ************************************ 00:10:39.974 END TEST nvmf_fio_target 00:10:39.974 ************************************ 00:10:39.974 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.974 12:57:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:39.974 12:57:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:39.974 12:57:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.974 12:57:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:39.974 ************************************ 00:10:39.974 START TEST nvmf_bdevio 00:10:39.974 ************************************ 00:10:39.974 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:40.234 * Looking for test storage... 
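The next few trace entries gate the coverage options on the installed lcov version via cmp_versions from scripts/common.sh: the version strings are split on '.', '-' and ':' and compared component by component. A rough stand-alone equivalent, assuming plain decimal components (the helper name version_lt is made up for this sketch):

    # Succeed (return 0) when dotted version $1 sorts before $2, e.g. 1.15 < 2.
    version_lt() {
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x > y )) && return 1
            (( x < y )) && return 0
        done
        return 1   # equal versions are not less-than
    }

    lcov_ver=$(lcov --version | awk '{print $NF}')
    if version_lt "$lcov_ver" 2; then
        # Older lcov still wants the --rc lcov_* spelling seen in the trace.
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi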
00:10:40.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:40.234 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:40.234 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:10:40.234 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:40.234 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:40.234 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:40.234 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:40.234 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:40.234 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:40.234 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:40.234 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:40.234 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:40.234 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:40.234 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:40.234 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:40.234 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:40.234 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:40.234 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:40.234 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:40.234 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:40.234 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:40.234 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:40.234 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:40.234 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:40.234 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:40.234 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:40.234 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:40.234 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:40.234 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:40.234 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:40.234 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:40.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.235 --rc genhtml_branch_coverage=1 00:10:40.235 --rc genhtml_function_coverage=1 00:10:40.235 --rc genhtml_legend=1 00:10:40.235 --rc geninfo_all_blocks=1 00:10:40.235 --rc geninfo_unexecuted_blocks=1 00:10:40.235 00:10:40.235 ' 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:40.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.235 --rc genhtml_branch_coverage=1 00:10:40.235 --rc genhtml_function_coverage=1 00:10:40.235 --rc genhtml_legend=1 00:10:40.235 --rc geninfo_all_blocks=1 00:10:40.235 --rc geninfo_unexecuted_blocks=1 00:10:40.235 00:10:40.235 ' 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:40.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.235 --rc genhtml_branch_coverage=1 00:10:40.235 --rc genhtml_function_coverage=1 00:10:40.235 --rc genhtml_legend=1 00:10:40.235 --rc geninfo_all_blocks=1 00:10:40.235 --rc geninfo_unexecuted_blocks=1 00:10:40.235 00:10:40.235 ' 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:40.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.235 --rc genhtml_branch_coverage=1 00:10:40.235 --rc genhtml_function_coverage=1 00:10:40.235 --rc genhtml_legend=1 00:10:40.235 --rc geninfo_all_blocks=1 00:10:40.235 --rc geninfo_unexecuted_blocks=1 00:10:40.235 00:10:40.235 ' 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:40.235 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
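nvmftestinit then rebuilds the virtual test topology that the following trace walks through interface by interface: the target runs in its own network namespace, each side gets two veth legs, and everything meets on a bridge. Condensed into plain iproute2/iptables commands (device names and addresses exactly as they appear in the trace; the SPDK_NVMF comment tags that common.sh appends to the iptables rules are left out), the setup is roughly:

    # The target side lives in its own network namespace; the initiator stays in the root ns.
    ip netns add nvmf_tgt_ns_spdk

    # Two veth pairs for the initiator, two for the target; the *_br ends will join a bridge.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: initiator 10.0.0.1/.2, target 10.0.0.3/.4, all in 10.0.0.0/24.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring everything up and plug the bridge-side ends into nvmf_br.
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Open the NVMe/TCP port toward the initiator interfaces and allow bridged forwarding.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Sanity check: both sides can reach each other across the bridge.
    ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The ping checks at the end mirror what the trace does before declaring the fabric usable; the "Cannot find device" messages that precede the setup are just the teardown of any leftover interfaces from a previous run.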
00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:40.235 Cannot find device "nvmf_init_br" 00:10:40.235 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:40.236 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:40.236 Cannot find device "nvmf_init_br2" 00:10:40.236 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:40.236 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:40.236 Cannot find device "nvmf_tgt_br" 00:10:40.236 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:10:40.236 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:40.236 Cannot find device "nvmf_tgt_br2" 00:10:40.236 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:10:40.236 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:40.236 Cannot find device "nvmf_init_br" 00:10:40.236 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:10:40.236 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:40.236 Cannot find device "nvmf_init_br2" 00:10:40.236 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:10:40.236 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:40.236 Cannot find device "nvmf_tgt_br" 00:10:40.236 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:10:40.236 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:40.494 Cannot find device "nvmf_tgt_br2" 00:10:40.494 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:10:40.495 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:40.495 Cannot find device "nvmf_br" 00:10:40.495 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:10:40.495 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:40.495 Cannot find device "nvmf_init_if" 00:10:40.495 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:10:40.495 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:40.495 Cannot find device "nvmf_init_if2" 00:10:40.495 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:10:40.495 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:40.495 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:40.495 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:10:40.495 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:40.495 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:40.495 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:10:40.495 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:40.495 
12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:40.495 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:40.495 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:40.495 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:40.495 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:40.495 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:40.495 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:40.495 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:40.495 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:40.495 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:40.495 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:40.495 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:40.495 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:40.495 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:40.495 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:40.495 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:40.495 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:40.495 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:40.495 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:40.495 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:40.495 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:40.495 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:40.495 12:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:40.495 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:40.754 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:40.754 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:40.754 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:40.754 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:40.754 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:40.754 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:40.754 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:40.754 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:40.754 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:40.754 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:10:40.754 00:10:40.754 --- 10.0.0.3 ping statistics --- 00:10:40.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.754 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:10:40.754 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:40.754 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:40.754 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:10:40.754 00:10:40.754 --- 10.0.0.4 ping statistics --- 00:10:40.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.754 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:10:40.754 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:40.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:40.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:10:40.754 00:10:40.754 --- 10.0.0.1 ping statistics --- 00:10:40.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.754 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:10:40.754 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:40.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:40.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:10:40.754 00:10:40.754 --- 10.0.0.2 ping statistics --- 00:10:40.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.754 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:10:40.754 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.754 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:10:40.754 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:40.754 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.754 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:40.754 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:40.754 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.754 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:40.754 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:40.754 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:40.754 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:40.754 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:40.754 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:40.754 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=66998 00:10:40.754 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:40.754 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 66998 00:10:40.754 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 66998 ']' 00:10:40.754 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.754 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:40.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.754 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.754 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:40.754 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:40.755 [2024-11-29 12:57:12.154997] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
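Everything nvmf_veth_init traced above reduces to a small, reproducible topology: two host-side initiator interfaces (10.0.0.1, 10.0.0.2), two target interfaces (10.0.0.3, 10.0.0.4) moved into the nvmf_tgt_ns_spdk namespace, and a bridge joining the peer ends so the pings in both directions succeed. A condensed sketch with one interface per side, using only names, addresses and rules that appear in the trace (the real helper also sets up the *_if2/*_br2 pair and tags the iptables rules with a longer SPDK_NVMF comment):

  ip netns add nvmf_tgt_ns_spdk

  # veth pairs: the *_if ends carry the IPs, the *_br ends get bridged
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # bridge the host ends together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # allow NVMe/TCP (port 4420) in and bridge forwarding; tag rules for later cleanup
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF

  # connectivity check in both directions
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1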
00:10:40.755 [2024-11-29 12:57:12.155088] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.013 [2024-11-29 12:57:12.305152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:41.013 [2024-11-29 12:57:12.366987] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.013 [2024-11-29 12:57:12.367044] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:41.013 [2024-11-29 12:57:12.367056] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.013 [2024-11-29 12:57:12.367064] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.013 [2024-11-29 12:57:12.367071] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:41.013 [2024-11-29 12:57:12.368791] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:10:41.013 [2024-11-29 12:57:12.368925] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:10:41.013 [2024-11-29 12:57:12.371945] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:10:41.013 [2024-11-29 12:57:12.371959] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:41.013 [2024-11-29 12:57:12.428699] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:41.013 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:41.013 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:41.013 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:41.013 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:41.013 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:41.272 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:41.272 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:41.272 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.272 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:41.272 [2024-11-29 12:57:12.537733] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:41.272 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.272 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:41.272 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.272 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:41.272 Malloc0 00:10:41.272 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.272 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:10:41.272 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.272 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:41.272 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.272 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:41.272 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.272 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:41.272 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.272 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:41.272 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.272 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:41.272 [2024-11-29 12:57:12.611551] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:41.272 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.272 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:41.272 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:41.272 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:41.272 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:41.272 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:41.272 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:41.272 { 00:10:41.272 "params": { 00:10:41.272 "name": "Nvme$subsystem", 00:10:41.272 "trtype": "$TEST_TRANSPORT", 00:10:41.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:41.272 "adrfam": "ipv4", 00:10:41.272 "trsvcid": "$NVMF_PORT", 00:10:41.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:41.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:41.272 "hdgst": ${hdgst:-false}, 00:10:41.272 "ddgst": ${ddgst:-false} 00:10:41.272 }, 00:10:41.272 "method": "bdev_nvme_attach_controller" 00:10:41.272 } 00:10:41.272 EOF 00:10:41.272 )") 00:10:41.272 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:41.272 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
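The rpc_cmd invocations above are wrappers around SPDK's scripts/rpc.py talking to the /var/tmp/spdk.sock socket the target was waited on. Doing the same target-side configuration by hand would look roughly like the following sketch; it reuses the exact parameters from the trace, while the rpc.py path and the meaning of flags not spelled out in the trace (such as -o) are assumptions:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $RPC nvmf_create_transport -t tcp -o -u 8192                  # TCP transport with the harness options
  $RPC bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB RAM-backed bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

bdevio, launched above with --json /dev/fd/62, is the initiator side of the same picture: gen_nvmf_target_json emits the JSON printed just below, which attaches nqn.2016-06.io.spdk:cnode1 at 10.0.0.3:4420 as bdev Nvme1n1 before the block-level tests run against it.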
00:10:41.272 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:41.272 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:41.272 "params": { 00:10:41.272 "name": "Nvme1", 00:10:41.272 "trtype": "tcp", 00:10:41.272 "traddr": "10.0.0.3", 00:10:41.272 "adrfam": "ipv4", 00:10:41.272 "trsvcid": "4420", 00:10:41.272 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:41.272 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:41.272 "hdgst": false, 00:10:41.272 "ddgst": false 00:10:41.272 }, 00:10:41.272 "method": "bdev_nvme_attach_controller" 00:10:41.272 }' 00:10:41.272 [2024-11-29 12:57:12.672610] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:10:41.272 [2024-11-29 12:57:12.672699] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67026 ] 00:10:41.531 [2024-11-29 12:57:12.825556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:41.531 [2024-11-29 12:57:12.893350] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.531 [2024-11-29 12:57:12.893482] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:41.531 [2024-11-29 12:57:12.893491] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.531 [2024-11-29 12:57:12.960969] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:41.789 I/O targets: 00:10:41.789 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:41.789 00:10:41.789 00:10:41.789 CUnit - A unit testing framework for C - Version 2.1-3 00:10:41.789 http://cunit.sourceforge.net/ 00:10:41.789 00:10:41.789 00:10:41.789 Suite: bdevio tests on: Nvme1n1 00:10:41.789 Test: blockdev write read block ...passed 00:10:41.789 Test: blockdev write zeroes read block ...passed 00:10:41.789 Test: blockdev write zeroes read no split ...passed 00:10:41.789 Test: blockdev write zeroes read split ...passed 00:10:41.789 Test: blockdev write zeroes read split partial ...passed 00:10:41.789 Test: blockdev reset ...[2024-11-29 12:57:13.113280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:41.789 [2024-11-29 12:57:13.113406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2352190 (9): Bad file descriptor 00:10:41.789 passed 00:10:41.789 Test: blockdev write read 8 blocks ...[2024-11-29 12:57:13.127932] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:41.789 passed 00:10:41.789 Test: blockdev write read size > 128k ...passed 00:10:41.789 Test: blockdev write read invalid size ...passed 00:10:41.789 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:41.789 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:41.789 Test: blockdev write read max offset ...passed 00:10:41.789 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:41.789 Test: blockdev writev readv 8 blocks ...passed 00:10:41.789 Test: blockdev writev readv 30 x 1block ...passed 00:10:41.789 Test: blockdev writev readv block ...passed 00:10:41.789 Test: blockdev writev readv size > 128k ...passed 00:10:41.789 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:41.789 Test: blockdev comparev and writev ...passed 00:10:41.789 Test: blockdev nvme passthru rw ...[2024-11-29 12:57:13.135840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:41.789 [2024-11-29 12:57:13.135948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:41.789 [2024-11-29 12:57:13.135967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:41.789 [2024-11-29 12:57:13.135978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:41.789 [2024-11-29 12:57:13.136271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:41.789 [2024-11-29 12:57:13.136288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:41.790 [2024-11-29 12:57:13.136302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:41.790 [2024-11-29 12:57:13.136312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:41.790 [2024-11-29 12:57:13.136563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:41.790 [2024-11-29 12:57:13.136579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:41.790 [2024-11-29 12:57:13.136594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:41.790 [2024-11-29 12:57:13.136604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:41.790 [2024-11-29 12:57:13.136864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:41.790 [2024-11-29 12:57:13.136891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:41.790 [2024-11-29 12:57:13.136920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:41.790 [2024-11-29 12:57:13.136930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED 
FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:41.790 passed 00:10:41.790 Test: blockdev nvme passthru vendor specific ...passed 00:10:41.790 Test: blockdev nvme admin passthru ...[2024-11-29 12:57:13.137659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:41.790 [2024-11-29 12:57:13.137682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:41.790 [2024-11-29 12:57:13.137784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:41.790 [2024-11-29 12:57:13.137799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:41.790 [2024-11-29 12:57:13.137920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:41.790 [2024-11-29 12:57:13.137936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:41.790 [2024-11-29 12:57:13.138046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:41.790 [2024-11-29 12:57:13.138061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:41.790 passed 00:10:41.790 Test: blockdev copy ...passed 00:10:41.790 00:10:41.790 Run Summary: Type Total Ran Passed Failed Inactive 00:10:41.790 suites 1 1 n/a 0 0 00:10:41.790 tests 23 23 23 0 0 00:10:41.790 asserts 152 152 152 0 n/a 00:10:41.790 00:10:41.790 Elapsed time = 0.146 seconds 00:10:42.049 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:42.049 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.049 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.049 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.049 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:42.049 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:42.049 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:42.049 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:42.049 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:42.049 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:42.049 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:42.049 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:42.049 rmmod nvme_tcp 00:10:42.049 rmmod nvme_fabrics 00:10:42.049 rmmod nvme_keyring 00:10:42.049 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:42.049 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:42.049 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:42.049 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@517 -- # '[' -n 66998 ']' 00:10:42.049 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 66998 00:10:42.049 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 66998 ']' 00:10:42.049 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 66998 00:10:42.049 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:42.049 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:42.049 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66998 00:10:42.049 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:42.049 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:42.049 killing process with pid 66998 00:10:42.049 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66998' 00:10:42.049 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 66998 00:10:42.049 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 66998 00:10:42.307 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:42.307 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:42.307 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:42.307 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:42.307 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:42.307 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:42.307 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:42.307 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:42.307 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:42.307 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:42.307 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:42.307 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:42.307 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:42.307 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:42.308 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:42.308 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:42.308 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:42.308 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:42.566 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:42.566 12:57:13 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:42.566 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:42.566 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:42.566 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:42.566 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.566 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.566 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.566 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:10:42.566 00:10:42.566 real 0m2.560s 00:10:42.566 user 0m6.654s 00:10:42.566 sys 0m0.862s 00:10:42.566 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.566 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.566 ************************************ 00:10:42.566 END TEST nvmf_bdevio 00:10:42.566 ************************************ 00:10:42.566 12:57:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:42.566 ************************************ 00:10:42.566 END TEST nvmf_target_core 00:10:42.566 ************************************ 00:10:42.566 00:10:42.566 real 2m37.642s 00:10:42.566 user 6m54.543s 00:10:42.566 sys 0m51.188s 00:10:42.566 12:57:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.566 12:57:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:42.566 12:57:14 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:42.566 12:57:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:42.566 12:57:14 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.566 12:57:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:42.826 ************************************ 00:10:42.826 START TEST nvmf_target_extra 00:10:42.826 ************************************ 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:42.826 * Looking for test storage... 
00:10:42.826 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:42.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.826 --rc genhtml_branch_coverage=1 00:10:42.826 --rc genhtml_function_coverage=1 00:10:42.826 --rc genhtml_legend=1 00:10:42.826 --rc geninfo_all_blocks=1 00:10:42.826 --rc geninfo_unexecuted_blocks=1 00:10:42.826 00:10:42.826 ' 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:42.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.826 --rc genhtml_branch_coverage=1 00:10:42.826 --rc genhtml_function_coverage=1 00:10:42.826 --rc genhtml_legend=1 00:10:42.826 --rc geninfo_all_blocks=1 00:10:42.826 --rc geninfo_unexecuted_blocks=1 00:10:42.826 00:10:42.826 ' 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:42.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.826 --rc genhtml_branch_coverage=1 00:10:42.826 --rc genhtml_function_coverage=1 00:10:42.826 --rc genhtml_legend=1 00:10:42.826 --rc geninfo_all_blocks=1 00:10:42.826 --rc geninfo_unexecuted_blocks=1 00:10:42.826 00:10:42.826 ' 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:42.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.826 --rc genhtml_branch_coverage=1 00:10:42.826 --rc genhtml_function_coverage=1 00:10:42.826 --rc genhtml_legend=1 00:10:42.826 --rc geninfo_all_blocks=1 00:10:42.826 --rc geninfo_unexecuted_blocks=1 00:10:42.826 00:10:42.826 ' 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:42.826 12:57:14 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:42.826 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:42.826 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:42.827 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:42.827 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:42.827 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:42.827 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:42.827 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:10:42.827 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:42.827 12:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:42.827 12:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.827 12:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:42.827 ************************************ 00:10:42.827 START TEST nvmf_auth_target 00:10:42.827 ************************************ 00:10:42.827 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:43.086 * Looking for test storage... 
00:10:43.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:43.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.086 --rc genhtml_branch_coverage=1 00:10:43.086 --rc genhtml_function_coverage=1 00:10:43.086 --rc genhtml_legend=1 00:10:43.086 --rc geninfo_all_blocks=1 00:10:43.086 --rc geninfo_unexecuted_blocks=1 00:10:43.086 00:10:43.086 ' 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:43.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.086 --rc genhtml_branch_coverage=1 00:10:43.086 --rc genhtml_function_coverage=1 00:10:43.086 --rc genhtml_legend=1 00:10:43.086 --rc geninfo_all_blocks=1 00:10:43.086 --rc geninfo_unexecuted_blocks=1 00:10:43.086 00:10:43.086 ' 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:43.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.086 --rc genhtml_branch_coverage=1 00:10:43.086 --rc genhtml_function_coverage=1 00:10:43.086 --rc genhtml_legend=1 00:10:43.086 --rc geninfo_all_blocks=1 00:10:43.086 --rc geninfo_unexecuted_blocks=1 00:10:43.086 00:10:43.086 ' 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:43.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.086 --rc genhtml_branch_coverage=1 00:10:43.086 --rc genhtml_function_coverage=1 00:10:43.086 --rc genhtml_legend=1 00:10:43.086 --rc geninfo_all_blocks=1 00:10:43.086 --rc geninfo_unexecuted_blocks=1 00:10:43.086 00:10:43.086 ' 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:10:43.086 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:43.087 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:43.087 
12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:43.087 Cannot find device "nvmf_init_br" 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:43.087 Cannot find device "nvmf_init_br2" 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:10:43.087 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:43.346 Cannot find device "nvmf_tgt_br" 00:10:43.346 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:10:43.346 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:43.346 Cannot find device "nvmf_tgt_br2" 00:10:43.346 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:10:43.346 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:43.346 Cannot find device "nvmf_init_br" 00:10:43.346 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:10:43.346 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:43.346 Cannot find device "nvmf_init_br2" 00:10:43.346 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:10:43.346 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:43.346 Cannot find device "nvmf_tgt_br" 00:10:43.346 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:10:43.346 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:43.346 Cannot find device "nvmf_tgt_br2" 00:10:43.346 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:10:43.346 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:43.346 Cannot find device "nvmf_br" 00:10:43.346 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:10:43.346 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:43.346 Cannot find device "nvmf_init_if" 00:10:43.346 12:57:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:10:43.346 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:43.346 Cannot find device "nvmf_init_if2" 00:10:43.346 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:10:43.346 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:43.346 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:43.346 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:10:43.346 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:43.346 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:43.346 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:10:43.346 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:43.346 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:43.347 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:43.347 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:43.347 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:43.347 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:43.347 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:43.347 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:43.347 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:43.347 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:43.347 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:43.347 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:43.347 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:43.347 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:43.347 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:43.347 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:43.347 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:43.347 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:43.347 12:57:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:43.347 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:43.347 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:43.347 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:43.347 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:43.606 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:43.606 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:43.606 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:43.606 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:43.606 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:43.606 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:43.606 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:43.606 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:43.606 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:43.606 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:43.606 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:43.606 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:10:43.606 00:10:43.606 --- 10.0.0.3 ping statistics --- 00:10:43.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.606 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:10:43.606 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:43.606 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:43.606 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.138 ms 00:10:43.606 00:10:43.606 --- 10.0.0.4 ping statistics --- 00:10:43.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.607 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:10:43.607 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:43.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:43.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:10:43.607 00:10:43.607 --- 10.0.0.1 ping statistics --- 00:10:43.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.607 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:10:43.607 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:43.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:43.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:10:43.607 00:10:43.607 --- 10.0.0.2 ping statistics --- 00:10:43.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.607 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:10:43.607 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:43.607 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:10:43.607 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:43.607 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:43.607 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:43.607 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:43.607 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:43.607 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:43.607 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:43.607 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:10:43.607 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:43.607 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:43.607 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.607 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=67308 00:10:43.607 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:43.607 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 67308 00:10:43.607 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67308 ']' 00:10:43.607 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.607 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.607 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
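The trace up to this point is the nvmf_veth_init step: it builds a veth/network-namespace topology in which the SPDK target runs inside nvmf_tgt_ns_spdk on 10.0.0.3/10.0.0.4 while the initiator stays in the root namespace on 10.0.0.1/10.0.0.2, joins the host-side veth ends with the nvmf_br bridge, opens TCP port 4420 in iptables, and verifies reachability with ping before nvmf_tgt is launched inside the namespace. The sketch below is a simplified standalone reconstruction of that topology, reusing the interface names and addresses from the log; it is not the verbatim nvmf/common.sh code, and it omits the second initiator/target pair and the cleanup path.

#!/usr/bin/env bash
# Minimal sketch (run as root) of the veth/netns/bridge layout the trace builds.
set -e

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# One initiator-side and one target-side veth pair, named as in the log.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns "$NS"

# Addressing taken from the log: initiator 10.0.0.1/24, target 10.0.0.3/24.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set lo up

# Bridge the host-side veth ends so initiator and namespaced target can talk.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Allow NVMe/TCP (port 4420) in and bridged forwarding, as the ipts helper does.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity check mirroring the log's ping of the target address.
ping -c 1 10.0.0.3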
00:10:43.607 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.607 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67333 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5b9d1c2871fd8ee884e80e7e852132fad5a3f304d7c91b1f 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Sxu 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5b9d1c2871fd8ee884e80e7e852132fad5a3f304d7c91b1f 0 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5b9d1c2871fd8ee884e80e7e852132fad5a3f304d7c91b1f 0 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5b9d1c2871fd8ee884e80e7e852132fad5a3f304d7c91b1f 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:44.175 12:57:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Sxu 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Sxu 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Sxu 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1c6323bace5143a0f382b665a5f2a9373b4d00f8edc80cbc9fb0c5b387ef390b 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Le9 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1c6323bace5143a0f382b665a5f2a9373b4d00f8edc80cbc9fb0c5b387ef390b 3 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1c6323bace5143a0f382b665a5f2a9373b4d00f8edc80cbc9fb0c5b387ef390b 3 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1c6323bace5143a0f382b665a5f2a9373b4d00f8edc80cbc9fb0c5b387ef390b 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Le9 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Le9 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.Le9 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:10:44.175 12:57:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=85b2161ac4434240ccec8b38ba7e1a5c 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.cgB 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 85b2161ac4434240ccec8b38ba7e1a5c 1 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 85b2161ac4434240ccec8b38ba7e1a5c 1 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=85b2161ac4434240ccec8b38ba7e1a5c 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.cgB 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.cgB 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.cgB 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b26a78de295ee6654a6af43cab4574c7b8729f88607a290c 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Pe4 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b26a78de295ee6654a6af43cab4574c7b8729f88607a290c 2 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b26a78de295ee6654a6af43cab4574c7b8729f88607a290c 2 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:44.175 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:44.176 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b26a78de295ee6654a6af43cab4574c7b8729f88607a290c 00:10:44.176 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:10:44.176 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Pe4 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Pe4 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Pe4 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d988879834f8fa2be5af3df165b6a3d461ee7460bede3d72 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Olc 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d988879834f8fa2be5af3df165b6a3d461ee7460bede3d72 2 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d988879834f8fa2be5af3df165b6a3d461ee7460bede3d72 2 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d988879834f8fa2be5af3df165b6a3d461ee7460bede3d72 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Olc 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Olc 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Olc 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:44.435 12:57:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ff706e95ac2d8470223df0c3f09a7e1e 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ZSi 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ff706e95ac2d8470223df0c3f09a7e1e 1 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ff706e95ac2d8470223df0c3f09a7e1e 1 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ff706e95ac2d8470223df0c3f09a7e1e 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ZSi 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ZSi 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.ZSi 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6192d87a61024f667608f22cab46de27fdd0e202ea59a6d5f6591fc415156811 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.gah 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
6192d87a61024f667608f22cab46de27fdd0e202ea59a6d5f6591fc415156811 3 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6192d87a61024f667608f22cab46de27fdd0e202ea59a6d5f6591fc415156811 3 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6192d87a61024f667608f22cab46de27fdd0e202ea59a6d5f6591fc415156811 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.gah 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.gah 00:10:44.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.gah 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67308 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67308 ']' 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:44.435 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.004 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:45.004 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:10:45.004 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67333 /var/tmp/host.sock 00:10:45.004 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67333 ']' 00:10:45.004 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:10:45.004 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:45.004 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:10:45.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
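The block above generates the four DH-CHAP keys and controller keys used by the rest of the test: gen_dhchap_key draws random bytes with xxd, and format_dhchap_key wraps the hex string in a DHHC-1 secret before it is written to a /tmp/spdk.key-*.* file with mode 0600. The sketch below shows that transformation in isolation; the python3 helper and the little-endian CRC-32 suffix are assumptions about what the "python -" step in the trace does, based on the standard NVMe-oF DH-HMAC-CHAP secret representation, and the hash ids (null=0, sha256=1, sha384=2, sha512=3) come from the digests map shown in the log.

#!/usr/bin/env bash
# Hedged sketch of gen_dhchap_key/format_dhchap_key: emit "DHHC-1:<id>:<base64>:".
set -e

digest_id=0   # null=0, sha256=1, sha384=2, sha512=3 (per the digests map in the log)
len=48        # key length in hex characters, as in "gen_dhchap_key null 48"

# len hex characters come from len/2 random bytes, exactly as the trace shows.
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)

# Assumed encoding: base64 of the ASCII secret followed by its CRC-32 (little-endian).
python3 - "$digest_id" "$key" <<'EOF'
import base64, struct, sys, zlib

digest_id, key = int(sys.argv[1]), sys.argv[2].encode()
crc = struct.pack("<I", zlib.crc32(key) & 0xFFFFFFFF)
print("DHHC-1:%02x:%s:" % (digest_id, base64.b64encode(key + crc).decode()))
EOF

The resulting string has the same shape as the --dhchap-secret arguments seen later in the log (for example DHHC-1:00:...==: for a null-digest key), which is what gets stored in the keyring files and passed to nvme connect.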
00:10:45.004 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:45.005 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.264 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:45.264 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:10:45.264 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:10:45.264 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.264 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.264 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.264 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:45.264 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Sxu 00:10:45.264 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.264 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.264 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.264 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Sxu 00:10:45.264 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Sxu 00:10:45.522 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.Le9 ]] 00:10:45.522 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Le9 00:10:45.522 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.522 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.522 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.522 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Le9 00:10:45.522 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Le9 00:10:45.780 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:45.780 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.cgB 00:10:45.780 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.780 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.780 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.780 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.cgB 00:10:45.780 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.cgB 00:10:46.038 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Pe4 ]] 00:10:46.038 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Pe4 00:10:46.038 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.038 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.038 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.038 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Pe4 00:10:46.038 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Pe4 00:10:46.297 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:46.297 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Olc 00:10:46.297 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.297 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.297 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.297 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Olc 00:10:46.297 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Olc 00:10:46.555 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.ZSi ]] 00:10:46.555 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZSi 00:10:46.555 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.555 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.555 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.555 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZSi 00:10:46.555 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZSi 00:10:46.813 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:46.814 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.gah 00:10:46.814 12:57:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.814 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.814 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.814 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.gah 00:10:46.814 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.gah 00:10:47.072 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:10:47.072 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:10:47.072 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:47.072 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:47.072 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:47.072 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:47.330 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:10:47.330 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:47.330 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:47.330 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:47.330 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:47.330 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:47.330 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:47.330 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.330 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.330 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.330 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:47.330 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:47.330 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:47.589 00:10:47.848 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:47.848 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:47.848 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:48.106 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:48.106 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:48.106 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.106 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.106 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.106 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:48.106 { 00:10:48.106 "cntlid": 1, 00:10:48.106 "qid": 0, 00:10:48.106 "state": "enabled", 00:10:48.106 "thread": "nvmf_tgt_poll_group_000", 00:10:48.106 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:10:48.106 "listen_address": { 00:10:48.106 "trtype": "TCP", 00:10:48.106 "adrfam": "IPv4", 00:10:48.106 "traddr": "10.0.0.3", 00:10:48.106 "trsvcid": "4420" 00:10:48.106 }, 00:10:48.106 "peer_address": { 00:10:48.106 "trtype": "TCP", 00:10:48.106 "adrfam": "IPv4", 00:10:48.106 "traddr": "10.0.0.1", 00:10:48.106 "trsvcid": "37860" 00:10:48.106 }, 00:10:48.106 "auth": { 00:10:48.106 "state": "completed", 00:10:48.106 "digest": "sha256", 00:10:48.106 "dhgroup": "null" 00:10:48.106 } 00:10:48.106 } 00:10:48.106 ]' 00:10:48.106 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:48.106 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:48.106 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:48.107 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:48.107 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:48.107 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:48.107 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:48.107 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:48.364 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:10:48.364 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:10:53.663 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:53.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:53.663 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:10:53.663 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.663 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.663 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.663 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:53.663 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:53.663 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:53.663 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:10:53.663 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:53.663 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:53.663 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:53.663 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:53.663 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:53.663 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:53.663 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.663 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.663 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.663 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:53.663 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:53.663 12:57:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:53.663 00:10:53.663 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:53.663 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:53.663 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:54.231 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:54.231 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:54.231 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.231 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.231 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.231 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:54.231 { 00:10:54.231 "cntlid": 3, 00:10:54.231 "qid": 0, 00:10:54.231 "state": "enabled", 00:10:54.231 "thread": "nvmf_tgt_poll_group_000", 00:10:54.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:10:54.231 "listen_address": { 00:10:54.231 "trtype": "TCP", 00:10:54.231 "adrfam": "IPv4", 00:10:54.231 "traddr": "10.0.0.3", 00:10:54.231 "trsvcid": "4420" 00:10:54.231 }, 00:10:54.231 "peer_address": { 00:10:54.231 "trtype": "TCP", 00:10:54.231 "adrfam": "IPv4", 00:10:54.231 "traddr": "10.0.0.1", 00:10:54.231 "trsvcid": "45940" 00:10:54.231 }, 00:10:54.231 "auth": { 00:10:54.231 "state": "completed", 00:10:54.231 "digest": "sha256", 00:10:54.231 "dhgroup": "null" 00:10:54.231 } 00:10:54.231 } 00:10:54.231 ]' 00:10:54.231 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:54.231 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:54.231 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:54.231 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:54.231 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:54.231 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:54.231 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:54.231 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:54.490 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret 
DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:10:54.490 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:10:55.428 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:55.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:55.428 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:10:55.428 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.428 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.428 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.428 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:55.428 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:55.428 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:55.428 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:10:55.428 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:55.428 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:55.428 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:55.428 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:55.428 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:55.428 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:55.428 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.428 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.428 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.428 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:55.428 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:55.428 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:55.687 00:10:55.945 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:55.945 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:55.945 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:56.204 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:56.204 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:56.204 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.204 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.204 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.204 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:56.204 { 00:10:56.204 "cntlid": 5, 00:10:56.204 "qid": 0, 00:10:56.204 "state": "enabled", 00:10:56.204 "thread": "nvmf_tgt_poll_group_000", 00:10:56.204 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:10:56.204 "listen_address": { 00:10:56.204 "trtype": "TCP", 00:10:56.204 "adrfam": "IPv4", 00:10:56.204 "traddr": "10.0.0.3", 00:10:56.204 "trsvcid": "4420" 00:10:56.204 }, 00:10:56.204 "peer_address": { 00:10:56.204 "trtype": "TCP", 00:10:56.204 "adrfam": "IPv4", 00:10:56.204 "traddr": "10.0.0.1", 00:10:56.204 "trsvcid": "45958" 00:10:56.204 }, 00:10:56.204 "auth": { 00:10:56.204 "state": "completed", 00:10:56.204 "digest": "sha256", 00:10:56.204 "dhgroup": "null" 00:10:56.204 } 00:10:56.204 } 00:10:56.204 ]' 00:10:56.204 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:56.204 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:56.204 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:56.204 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:56.204 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:56.204 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:56.204 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:56.204 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:56.463 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:10:56.464 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:10:57.400 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:57.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:57.400 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:10:57.400 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.400 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.400 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.400 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:57.400 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:57.400 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:57.659 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:10:57.659 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:57.659 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:57.659 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:57.659 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:57.659 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:57.659 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key3 00:10:57.659 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.659 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.659 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.659 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:57.659 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:57.659 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:57.918 00:10:57.918 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:57.918 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:57.918 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:58.486 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:58.486 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:58.486 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.486 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.486 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.486 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:58.486 { 00:10:58.486 "cntlid": 7, 00:10:58.486 "qid": 0, 00:10:58.486 "state": "enabled", 00:10:58.486 "thread": "nvmf_tgt_poll_group_000", 00:10:58.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:10:58.486 "listen_address": { 00:10:58.486 "trtype": "TCP", 00:10:58.486 "adrfam": "IPv4", 00:10:58.486 "traddr": "10.0.0.3", 00:10:58.486 "trsvcid": "4420" 00:10:58.486 }, 00:10:58.486 "peer_address": { 00:10:58.486 "trtype": "TCP", 00:10:58.486 "adrfam": "IPv4", 00:10:58.486 "traddr": "10.0.0.1", 00:10:58.486 "trsvcid": "45990" 00:10:58.486 }, 00:10:58.486 "auth": { 00:10:58.486 "state": "completed", 00:10:58.486 "digest": "sha256", 00:10:58.486 "dhgroup": "null" 00:10:58.486 } 00:10:58.486 } 00:10:58.486 ]' 00:10:58.486 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:58.486 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:58.486 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:58.486 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:58.486 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:58.486 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:58.487 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:58.487 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:58.746 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:10:58.746 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:10:59.687 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:59.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:59.687 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:10:59.687 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.687 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.687 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.687 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:59.687 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:59.687 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:59.687 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:59.946 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:10:59.946 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:59.946 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:59.946 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:59.946 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:59.946 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:59.946 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:59.946 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.946 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.946 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.946 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:59.946 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:59.946 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.234 00:11:00.234 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:00.234 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:00.234 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:00.492 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:00.492 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:00.492 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.492 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.492 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.492 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:00.492 { 00:11:00.492 "cntlid": 9, 00:11:00.492 "qid": 0, 00:11:00.492 "state": "enabled", 00:11:00.492 "thread": "nvmf_tgt_poll_group_000", 00:11:00.492 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:11:00.492 "listen_address": { 00:11:00.492 "trtype": "TCP", 00:11:00.492 "adrfam": "IPv4", 00:11:00.492 "traddr": "10.0.0.3", 00:11:00.492 "trsvcid": "4420" 00:11:00.492 }, 00:11:00.492 "peer_address": { 00:11:00.492 "trtype": "TCP", 00:11:00.492 "adrfam": "IPv4", 00:11:00.492 "traddr": "10.0.0.1", 00:11:00.492 "trsvcid": "51404" 00:11:00.492 }, 00:11:00.492 "auth": { 00:11:00.492 "state": "completed", 00:11:00.492 "digest": "sha256", 00:11:00.492 "dhgroup": "ffdhe2048" 00:11:00.492 } 00:11:00.492 } 00:11:00.492 ]' 00:11:00.492 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:00.492 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:00.492 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:00.492 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:00.492 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:00.749 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:00.749 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:00.749 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:01.007 
12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:11:01.007 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:11:01.572 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:01.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:01.572 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:11:01.572 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.572 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.572 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.572 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:01.572 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:01.572 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:01.830 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:11:01.830 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:01.830 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:01.830 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:01.830 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:01.830 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:01.830 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:01.830 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.830 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.830 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.830 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:01.830 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:01.830 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:02.396 00:11:02.396 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:02.396 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:02.396 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:02.396 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:02.396 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:02.396 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.396 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.654 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.654 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:02.654 { 00:11:02.654 "cntlid": 11, 00:11:02.654 "qid": 0, 00:11:02.654 "state": "enabled", 00:11:02.654 "thread": "nvmf_tgt_poll_group_000", 00:11:02.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:11:02.654 "listen_address": { 00:11:02.654 "trtype": "TCP", 00:11:02.654 "adrfam": "IPv4", 00:11:02.654 "traddr": "10.0.0.3", 00:11:02.654 "trsvcid": "4420" 00:11:02.654 }, 00:11:02.654 "peer_address": { 00:11:02.654 "trtype": "TCP", 00:11:02.654 "adrfam": "IPv4", 00:11:02.654 "traddr": "10.0.0.1", 00:11:02.654 "trsvcid": "51414" 00:11:02.654 }, 00:11:02.654 "auth": { 00:11:02.654 "state": "completed", 00:11:02.654 "digest": "sha256", 00:11:02.654 "dhgroup": "ffdhe2048" 00:11:02.654 } 00:11:02.654 } 00:11:02.654 ]' 00:11:02.654 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:02.654 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:02.654 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:02.654 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:02.654 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:02.654 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:02.654 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:02.654 
12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:02.912 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:11:02.913 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:11:03.846 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:03.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:03.846 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:11:03.846 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.846 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.846 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.846 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:03.846 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:03.846 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:04.104 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:11:04.104 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:04.104 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:04.104 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:04.104 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:04.104 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:04.104 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:04.104 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.104 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.104 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:11:04.104 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:04.104 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:04.104 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:04.363 00:11:04.363 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:04.363 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:04.363 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:04.931 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:04.931 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:04.931 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.931 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.931 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.931 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:04.931 { 00:11:04.931 "cntlid": 13, 00:11:04.931 "qid": 0, 00:11:04.931 "state": "enabled", 00:11:04.931 "thread": "nvmf_tgt_poll_group_000", 00:11:04.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:11:04.931 "listen_address": { 00:11:04.931 "trtype": "TCP", 00:11:04.931 "adrfam": "IPv4", 00:11:04.931 "traddr": "10.0.0.3", 00:11:04.931 "trsvcid": "4420" 00:11:04.931 }, 00:11:04.931 "peer_address": { 00:11:04.931 "trtype": "TCP", 00:11:04.931 "adrfam": "IPv4", 00:11:04.931 "traddr": "10.0.0.1", 00:11:04.931 "trsvcid": "51446" 00:11:04.931 }, 00:11:04.931 "auth": { 00:11:04.931 "state": "completed", 00:11:04.931 "digest": "sha256", 00:11:04.931 "dhgroup": "ffdhe2048" 00:11:04.931 } 00:11:04.931 } 00:11:04.931 ]' 00:11:04.931 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:04.931 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:04.931 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:04.931 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:04.931 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:04.931 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:04.931 12:57:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:04.931 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:05.190 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:11:05.190 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:11:06.126 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:06.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:06.126 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:11:06.126 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.126 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.126 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.126 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:06.126 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:06.126 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:06.385 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:11:06.385 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:06.386 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:06.386 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:06.386 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:06.386 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:06.386 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key3 00:11:06.386 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.386 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
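The trace around this point repeats one DH-HMAC-CHAP round per digest/dhgroup/key combination. A condensed sketch of that round, using only commands that appear in this run (host NQN, subsystem NQN, the 10.0.0.3:4420 listener, key names and socket paths are taken from the trace; the DHHC-1 secrets are placeholders, not the real keys), looks roughly like this:

    # target side (default rpc.py socket): allow the host to authenticate with keyN/ckeyN
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # host side (-s /var/tmp/host.sock): pin the digest and DH group under test
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # attach a bdev controller with the same keys, then check the qpair's auth state
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

    # same round with the kernel initiator via nvme-cli, then tear down
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 \
        --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 \
        --dhchap-secret "DHHC-1:01:<host-secret>" --dhchap-ctrl-secret "DHHC-1:02:<ctrl-secret>"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31

The trace then continues with the next keyid/dhgroup iteration of the same loop.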
00:11:06.386 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.386 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:06.386 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:06.386 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:06.645 00:11:06.645 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:06.645 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.645 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:07.213 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:07.213 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:07.213 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.213 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.213 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.213 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:07.213 { 00:11:07.213 "cntlid": 15, 00:11:07.213 "qid": 0, 00:11:07.213 "state": "enabled", 00:11:07.213 "thread": "nvmf_tgt_poll_group_000", 00:11:07.213 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:11:07.213 "listen_address": { 00:11:07.213 "trtype": "TCP", 00:11:07.213 "adrfam": "IPv4", 00:11:07.213 "traddr": "10.0.0.3", 00:11:07.213 "trsvcid": "4420" 00:11:07.213 }, 00:11:07.213 "peer_address": { 00:11:07.213 "trtype": "TCP", 00:11:07.213 "adrfam": "IPv4", 00:11:07.213 "traddr": "10.0.0.1", 00:11:07.213 "trsvcid": "51460" 00:11:07.213 }, 00:11:07.213 "auth": { 00:11:07.213 "state": "completed", 00:11:07.213 "digest": "sha256", 00:11:07.213 "dhgroup": "ffdhe2048" 00:11:07.213 } 00:11:07.213 } 00:11:07.213 ]' 00:11:07.213 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:07.213 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:07.213 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:07.213 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:07.213 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:07.213 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:07.213 
12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.213 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:07.472 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:11:07.472 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:11:08.468 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:08.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:08.468 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:11:08.468 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.468 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.468 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.468 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:08.468 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:08.468 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:08.468 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:08.468 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:11:08.468 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:08.468 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:08.468 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:08.468 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:08.468 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:08.468 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:08.468 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.468 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:08.468 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.468 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:08.468 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:08.468 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:09.035 00:11:09.035 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:09.035 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:09.035 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:09.294 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:09.294 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:09.294 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.294 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.294 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.294 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:09.294 { 00:11:09.294 "cntlid": 17, 00:11:09.294 "qid": 0, 00:11:09.294 "state": "enabled", 00:11:09.294 "thread": "nvmf_tgt_poll_group_000", 00:11:09.294 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:11:09.294 "listen_address": { 00:11:09.294 "trtype": "TCP", 00:11:09.294 "adrfam": "IPv4", 00:11:09.294 "traddr": "10.0.0.3", 00:11:09.294 "trsvcid": "4420" 00:11:09.294 }, 00:11:09.294 "peer_address": { 00:11:09.294 "trtype": "TCP", 00:11:09.294 "adrfam": "IPv4", 00:11:09.294 "traddr": "10.0.0.1", 00:11:09.294 "trsvcid": "51494" 00:11:09.294 }, 00:11:09.294 "auth": { 00:11:09.294 "state": "completed", 00:11:09.294 "digest": "sha256", 00:11:09.294 "dhgroup": "ffdhe3072" 00:11:09.294 } 00:11:09.294 } 00:11:09.294 ]' 00:11:09.294 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:09.294 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:09.294 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:09.294 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:09.294 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:09.553 12:57:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:09.553 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:09.553 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.811 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:11:09.811 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:11:10.377 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.377 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:11:10.377 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.377 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.377 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.377 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:10.377 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:10.377 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:10.635 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:11:10.635 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:10.635 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:10.635 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:10.635 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:10.635 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.635 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:11:10.636 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.636 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.636 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.636 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.636 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.636 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:11.202 00:11:11.202 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:11.202 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:11.202 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:11.460 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:11.460 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:11.460 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.460 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.460 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.460 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:11.460 { 00:11:11.460 "cntlid": 19, 00:11:11.460 "qid": 0, 00:11:11.460 "state": "enabled", 00:11:11.460 "thread": "nvmf_tgt_poll_group_000", 00:11:11.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:11:11.460 "listen_address": { 00:11:11.460 "trtype": "TCP", 00:11:11.460 "adrfam": "IPv4", 00:11:11.460 "traddr": "10.0.0.3", 00:11:11.460 "trsvcid": "4420" 00:11:11.460 }, 00:11:11.460 "peer_address": { 00:11:11.460 "trtype": "TCP", 00:11:11.460 "adrfam": "IPv4", 00:11:11.460 "traddr": "10.0.0.1", 00:11:11.460 "trsvcid": "59346" 00:11:11.460 }, 00:11:11.460 "auth": { 00:11:11.460 "state": "completed", 00:11:11.460 "digest": "sha256", 00:11:11.460 "dhgroup": "ffdhe3072" 00:11:11.460 } 00:11:11.460 } 00:11:11.460 ]' 00:11:11.460 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:11.460 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:11.460 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:11.460 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:11.461 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:11.461 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.461 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.461 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.719 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:11:11.719 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:11:12.654 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:12.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:12.654 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:11:12.654 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.654 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.654 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.654 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:12.654 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:12.654 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:12.912 12:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:11:12.912 12:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:12.912 12:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:12.912 12:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:12.912 12:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:12.912 12:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.912 12:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.912 12:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.912 12:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.912 12:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.912 12:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.912 12:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.912 12:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:13.170 00:11:13.170 12:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:13.170 12:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:13.170 12:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.429 12:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.429 12:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.429 12:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.429 12:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.429 12:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.429 12:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:13.429 { 00:11:13.429 "cntlid": 21, 00:11:13.429 "qid": 0, 00:11:13.429 "state": "enabled", 00:11:13.429 "thread": "nvmf_tgt_poll_group_000", 00:11:13.429 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:11:13.429 "listen_address": { 00:11:13.429 "trtype": "TCP", 00:11:13.429 "adrfam": "IPv4", 00:11:13.429 "traddr": "10.0.0.3", 00:11:13.429 "trsvcid": "4420" 00:11:13.429 }, 00:11:13.429 "peer_address": { 00:11:13.429 "trtype": "TCP", 00:11:13.429 "adrfam": "IPv4", 00:11:13.429 "traddr": "10.0.0.1", 00:11:13.429 "trsvcid": "59366" 00:11:13.429 }, 00:11:13.429 "auth": { 00:11:13.429 "state": "completed", 00:11:13.429 "digest": "sha256", 00:11:13.429 "dhgroup": "ffdhe3072" 00:11:13.429 } 00:11:13.429 } 00:11:13.429 ]' 00:11:13.429 12:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:13.687 12:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:13.687 12:57:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:13.687 12:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:13.687 12:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:13.687 12:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.687 12:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.687 12:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.944 12:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:11:13.944 12:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:11:14.880 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.880 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:11:14.880 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.880 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.880 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.880 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:14.880 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:14.880 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:15.139 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:11:15.139 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:15.139 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:15.139 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:15.139 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:15.139 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.139 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key3 00:11:15.139 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.139 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.139 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.139 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:15.139 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:15.139 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:15.398 00:11:15.398 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:15.398 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:15.398 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.657 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:15.657 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.657 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.657 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.657 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.657 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:15.657 { 00:11:15.657 "cntlid": 23, 00:11:15.657 "qid": 0, 00:11:15.657 "state": "enabled", 00:11:15.657 "thread": "nvmf_tgt_poll_group_000", 00:11:15.657 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:11:15.657 "listen_address": { 00:11:15.657 "trtype": "TCP", 00:11:15.657 "adrfam": "IPv4", 00:11:15.657 "traddr": "10.0.0.3", 00:11:15.657 "trsvcid": "4420" 00:11:15.657 }, 00:11:15.657 "peer_address": { 00:11:15.657 "trtype": "TCP", 00:11:15.657 "adrfam": "IPv4", 00:11:15.657 "traddr": "10.0.0.1", 00:11:15.657 "trsvcid": "59388" 00:11:15.657 }, 00:11:15.657 "auth": { 00:11:15.657 "state": "completed", 00:11:15.657 "digest": "sha256", 00:11:15.657 "dhgroup": "ffdhe3072" 00:11:15.657 } 00:11:15.657 } 00:11:15.657 ]' 00:11:15.657 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:15.927 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:11:15.927 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:15.927 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:15.927 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:15.927 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:15.927 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:15.927 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:16.215 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:11:16.215 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:11:16.783 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.783 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:11:16.783 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.783 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.783 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.783 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:16.783 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:16.783 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:16.783 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:17.041 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:11:17.041 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:17.041 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:17.041 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:17.041 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:17.041 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:17.041 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:17.041 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.041 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.041 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.041 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:17.041 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:17.041 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:17.607 00:11:17.607 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:17.607 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:17.607 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.866 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.866 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.866 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.866 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.866 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.866 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:17.866 { 00:11:17.866 "cntlid": 25, 00:11:17.866 "qid": 0, 00:11:17.866 "state": "enabled", 00:11:17.866 "thread": "nvmf_tgt_poll_group_000", 00:11:17.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:11:17.866 "listen_address": { 00:11:17.866 "trtype": "TCP", 00:11:17.866 "adrfam": "IPv4", 00:11:17.866 "traddr": "10.0.0.3", 00:11:17.866 "trsvcid": "4420" 00:11:17.866 }, 00:11:17.866 "peer_address": { 00:11:17.866 "trtype": "TCP", 00:11:17.866 "adrfam": "IPv4", 00:11:17.866 "traddr": "10.0.0.1", 00:11:17.866 "trsvcid": "59414" 00:11:17.866 }, 00:11:17.866 "auth": { 00:11:17.866 "state": "completed", 00:11:17.866 "digest": "sha256", 00:11:17.866 "dhgroup": "ffdhe4096" 00:11:17.866 } 00:11:17.866 } 00:11:17.866 ]' 00:11:17.866 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:11:17.866 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:17.866 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:17.866 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:17.866 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:17.866 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:17.866 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.866 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:18.432 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:11:18.432 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:11:18.998 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.998 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:11:18.998 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.998 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.998 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.998 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:18.998 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:18.998 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:19.255 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:11:19.255 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:19.255 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:19.255 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:19.255 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:19.255 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:19.255 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:19.255 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.255 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.255 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.255 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:19.255 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:19.255 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:19.822 00:11:19.822 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:19.822 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.822 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:20.080 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:20.080 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:20.080 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.080 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.080 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.080 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:20.080 { 00:11:20.080 "cntlid": 27, 00:11:20.080 "qid": 0, 00:11:20.080 "state": "enabled", 00:11:20.080 "thread": "nvmf_tgt_poll_group_000", 00:11:20.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:11:20.080 "listen_address": { 00:11:20.080 "trtype": "TCP", 00:11:20.080 "adrfam": "IPv4", 00:11:20.080 "traddr": "10.0.0.3", 00:11:20.080 "trsvcid": "4420" 00:11:20.080 }, 00:11:20.080 "peer_address": { 00:11:20.080 "trtype": "TCP", 00:11:20.080 "adrfam": "IPv4", 00:11:20.080 "traddr": "10.0.0.1", 00:11:20.080 "trsvcid": "59448" 00:11:20.080 }, 00:11:20.080 "auth": { 00:11:20.080 "state": "completed", 
00:11:20.080 "digest": "sha256", 00:11:20.080 "dhgroup": "ffdhe4096" 00:11:20.080 } 00:11:20.080 } 00:11:20.080 ]' 00:11:20.080 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:20.080 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:20.080 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:20.080 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:20.080 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:20.339 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:20.339 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:20.339 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:20.597 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:11:20.597 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:11:21.166 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:21.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:21.166 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:11:21.166 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.166 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.166 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.166 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:21.166 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:21.166 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:21.425 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:11:21.425 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:21.425 12:57:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:21.425 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:21.425 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:21.425 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:21.425 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:21.425 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.425 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.425 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.425 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:21.425 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:21.425 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:21.993 00:11:21.993 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:21.993 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:21.993 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:22.252 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:22.252 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:22.252 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.252 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.252 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.252 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:22.252 { 00:11:22.252 "cntlid": 29, 00:11:22.252 "qid": 0, 00:11:22.252 "state": "enabled", 00:11:22.252 "thread": "nvmf_tgt_poll_group_000", 00:11:22.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:11:22.252 "listen_address": { 00:11:22.252 "trtype": "TCP", 00:11:22.252 "adrfam": "IPv4", 00:11:22.252 "traddr": "10.0.0.3", 00:11:22.252 "trsvcid": "4420" 00:11:22.252 }, 00:11:22.252 "peer_address": { 00:11:22.252 "trtype": "TCP", 00:11:22.252 "adrfam": 
"IPv4", 00:11:22.252 "traddr": "10.0.0.1", 00:11:22.252 "trsvcid": "45500" 00:11:22.252 }, 00:11:22.252 "auth": { 00:11:22.252 "state": "completed", 00:11:22.252 "digest": "sha256", 00:11:22.252 "dhgroup": "ffdhe4096" 00:11:22.252 } 00:11:22.252 } 00:11:22.252 ]' 00:11:22.252 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:22.252 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:22.252 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:22.252 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:22.252 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:22.252 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:22.252 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:22.252 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.511 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:11:22.511 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:11:23.447 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:23.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:23.448 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:11:23.448 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.448 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.448 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.448 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:23.448 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:23.448 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:23.712 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:11:23.712 12:57:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:23.712 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:23.712 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:23.712 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:23.712 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:23.712 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key3 00:11:23.712 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.712 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.712 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.712 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:23.712 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:23.712 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:23.999 00:11:23.999 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:23.999 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:23.999 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:24.270 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:24.270 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:24.270 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.270 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.270 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.270 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:24.270 { 00:11:24.270 "cntlid": 31, 00:11:24.270 "qid": 0, 00:11:24.270 "state": "enabled", 00:11:24.270 "thread": "nvmf_tgt_poll_group_000", 00:11:24.270 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:11:24.270 "listen_address": { 00:11:24.270 "trtype": "TCP", 00:11:24.270 "adrfam": "IPv4", 00:11:24.270 "traddr": "10.0.0.3", 00:11:24.270 "trsvcid": "4420" 00:11:24.270 }, 00:11:24.270 "peer_address": { 00:11:24.270 "trtype": "TCP", 
00:11:24.270 "adrfam": "IPv4", 00:11:24.270 "traddr": "10.0.0.1", 00:11:24.270 "trsvcid": "45532" 00:11:24.270 }, 00:11:24.270 "auth": { 00:11:24.270 "state": "completed", 00:11:24.270 "digest": "sha256", 00:11:24.270 "dhgroup": "ffdhe4096" 00:11:24.270 } 00:11:24.270 } 00:11:24.270 ]' 00:11:24.270 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:24.270 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:24.270 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:24.529 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:24.529 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:24.529 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:24.529 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:24.529 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.788 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:11:24.788 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:11:25.724 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:25.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:25.724 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:11:25.724 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.724 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.724 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.724 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:25.724 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:25.724 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:25.724 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:25.724 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:11:25.724 
12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:25.724 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:25.724 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:25.724 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:25.724 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.725 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:25.725 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.725 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.725 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.725 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:25.725 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:25.725 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.292 00:11:26.292 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:26.292 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:26.292 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:26.551 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:26.551 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:26.551 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.551 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.551 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.551 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:26.551 { 00:11:26.551 "cntlid": 33, 00:11:26.551 "qid": 0, 00:11:26.551 "state": "enabled", 00:11:26.551 "thread": "nvmf_tgt_poll_group_000", 00:11:26.551 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:11:26.551 "listen_address": { 00:11:26.551 "trtype": "TCP", 00:11:26.551 "adrfam": "IPv4", 00:11:26.551 "traddr": 
"10.0.0.3", 00:11:26.551 "trsvcid": "4420" 00:11:26.551 }, 00:11:26.551 "peer_address": { 00:11:26.551 "trtype": "TCP", 00:11:26.551 "adrfam": "IPv4", 00:11:26.551 "traddr": "10.0.0.1", 00:11:26.551 "trsvcid": "45558" 00:11:26.551 }, 00:11:26.551 "auth": { 00:11:26.551 "state": "completed", 00:11:26.551 "digest": "sha256", 00:11:26.551 "dhgroup": "ffdhe6144" 00:11:26.551 } 00:11:26.551 } 00:11:26.551 ]' 00:11:26.551 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:26.811 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:26.811 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:26.811 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:26.811 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:26.811 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:26.811 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:26.811 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:27.069 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:11:27.069 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:11:27.636 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:27.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:27.895 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:11:27.895 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.895 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.895 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.895 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:27.895 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:27.895 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:28.154 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:11:28.154 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:28.154 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:28.154 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:28.154 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:28.154 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:28.155 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.155 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.155 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.155 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.155 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.155 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.155 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.722 00:11:28.722 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:28.722 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:28.722 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:28.981 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:28.981 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:28.981 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.981 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.981 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.981 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:28.981 { 00:11:28.981 "cntlid": 35, 00:11:28.981 "qid": 0, 00:11:28.981 "state": "enabled", 00:11:28.981 "thread": "nvmf_tgt_poll_group_000", 
00:11:28.981 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:11:28.981 "listen_address": { 00:11:28.981 "trtype": "TCP", 00:11:28.981 "adrfam": "IPv4", 00:11:28.981 "traddr": "10.0.0.3", 00:11:28.981 "trsvcid": "4420" 00:11:28.981 }, 00:11:28.981 "peer_address": { 00:11:28.981 "trtype": "TCP", 00:11:28.981 "adrfam": "IPv4", 00:11:28.981 "traddr": "10.0.0.1", 00:11:28.981 "trsvcid": "45588" 00:11:28.981 }, 00:11:28.981 "auth": { 00:11:28.981 "state": "completed", 00:11:28.981 "digest": "sha256", 00:11:28.981 "dhgroup": "ffdhe6144" 00:11:28.981 } 00:11:28.981 } 00:11:28.981 ]' 00:11:28.981 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:28.981 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:28.981 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:28.981 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:28.981 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:28.982 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:28.982 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:28.982 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:29.549 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:11:29.549 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:11:30.116 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.117 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:11:30.117 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.117 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.117 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.117 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:30.117 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:30.117 12:58:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:30.375 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:11:30.375 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:30.375 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:30.375 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:30.375 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:30.375 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.375 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.375 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.375 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.375 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.375 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.375 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.375 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.942 00:11:30.942 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:30.942 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:30.942 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.201 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.201 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.201 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.201 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.201 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.201 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:31.201 { 
00:11:31.201 "cntlid": 37, 00:11:31.201 "qid": 0, 00:11:31.201 "state": "enabled", 00:11:31.201 "thread": "nvmf_tgt_poll_group_000", 00:11:31.201 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:11:31.201 "listen_address": { 00:11:31.201 "trtype": "TCP", 00:11:31.201 "adrfam": "IPv4", 00:11:31.201 "traddr": "10.0.0.3", 00:11:31.201 "trsvcid": "4420" 00:11:31.201 }, 00:11:31.201 "peer_address": { 00:11:31.201 "trtype": "TCP", 00:11:31.201 "adrfam": "IPv4", 00:11:31.201 "traddr": "10.0.0.1", 00:11:31.201 "trsvcid": "59468" 00:11:31.201 }, 00:11:31.201 "auth": { 00:11:31.201 "state": "completed", 00:11:31.201 "digest": "sha256", 00:11:31.201 "dhgroup": "ffdhe6144" 00:11:31.201 } 00:11:31.201 } 00:11:31.201 ]' 00:11:31.201 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:31.201 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:31.201 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:31.201 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:31.201 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:31.460 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:31.460 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.460 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:31.755 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:11:31.755 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:11:32.323 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.323 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:11:32.323 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.323 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.323 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.323 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:32.323 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:32.323 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:32.890 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:11:32.890 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:32.890 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:32.890 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:32.890 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:32.890 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.890 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key3 00:11:32.890 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.890 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.890 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.890 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:32.890 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:32.890 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:33.148 00:11:33.406 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:33.406 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:33.406 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.664 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.664 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.664 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.664 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.664 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.664 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:11:33.664 { 00:11:33.664 "cntlid": 39, 00:11:33.664 "qid": 0, 00:11:33.664 "state": "enabled", 00:11:33.664 "thread": "nvmf_tgt_poll_group_000", 00:11:33.664 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:11:33.664 "listen_address": { 00:11:33.664 "trtype": "TCP", 00:11:33.664 "adrfam": "IPv4", 00:11:33.664 "traddr": "10.0.0.3", 00:11:33.664 "trsvcid": "4420" 00:11:33.664 }, 00:11:33.664 "peer_address": { 00:11:33.664 "trtype": "TCP", 00:11:33.664 "adrfam": "IPv4", 00:11:33.664 "traddr": "10.0.0.1", 00:11:33.664 "trsvcid": "59490" 00:11:33.664 }, 00:11:33.664 "auth": { 00:11:33.664 "state": "completed", 00:11:33.664 "digest": "sha256", 00:11:33.664 "dhgroup": "ffdhe6144" 00:11:33.664 } 00:11:33.664 } 00:11:33.664 ]' 00:11:33.664 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:33.665 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:33.665 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:33.665 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:33.665 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:33.665 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.665 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.665 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.230 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:11:34.230 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:11:34.797 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.797 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:34.797 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:11:34.797 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.797 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.797 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.797 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:34.797 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:34.797 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:34.797 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:35.056 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:11:35.056 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:35.056 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:35.056 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:35.056 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:35.056 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.056 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.056 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.056 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.315 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.315 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.315 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.315 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.882 00:11:35.882 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:35.882 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.882 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:36.142 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.142 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.142 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.142 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.142 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:11:36.142 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:36.142 { 00:11:36.142 "cntlid": 41, 00:11:36.142 "qid": 0, 00:11:36.142 "state": "enabled", 00:11:36.142 "thread": "nvmf_tgt_poll_group_000", 00:11:36.142 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:11:36.142 "listen_address": { 00:11:36.142 "trtype": "TCP", 00:11:36.142 "adrfam": "IPv4", 00:11:36.142 "traddr": "10.0.0.3", 00:11:36.142 "trsvcid": "4420" 00:11:36.142 }, 00:11:36.142 "peer_address": { 00:11:36.142 "trtype": "TCP", 00:11:36.142 "adrfam": "IPv4", 00:11:36.142 "traddr": "10.0.0.1", 00:11:36.142 "trsvcid": "59522" 00:11:36.142 }, 00:11:36.142 "auth": { 00:11:36.142 "state": "completed", 00:11:36.142 "digest": "sha256", 00:11:36.142 "dhgroup": "ffdhe8192" 00:11:36.142 } 00:11:36.142 } 00:11:36.142 ]' 00:11:36.142 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:36.401 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:36.401 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:36.401 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:36.401 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:36.401 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:36.401 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:36.401 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:36.660 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:11:36.660 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:11:37.596 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.596 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:11:37.596 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.596 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.596 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:37.596 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:37.596 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:37.596 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:37.855 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:11:37.855 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:37.855 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:37.855 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:37.855 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:37.855 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.855 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.855 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.855 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.855 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.855 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.855 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.855 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:38.422 00:11:38.422 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:38.422 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:38.422 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.681 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.681 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.681 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.681 12:58:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.681 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.681 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:38.681 { 00:11:38.681 "cntlid": 43, 00:11:38.681 "qid": 0, 00:11:38.681 "state": "enabled", 00:11:38.681 "thread": "nvmf_tgt_poll_group_000", 00:11:38.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:11:38.681 "listen_address": { 00:11:38.681 "trtype": "TCP", 00:11:38.681 "adrfam": "IPv4", 00:11:38.681 "traddr": "10.0.0.3", 00:11:38.681 "trsvcid": "4420" 00:11:38.681 }, 00:11:38.681 "peer_address": { 00:11:38.681 "trtype": "TCP", 00:11:38.681 "adrfam": "IPv4", 00:11:38.681 "traddr": "10.0.0.1", 00:11:38.681 "trsvcid": "59544" 00:11:38.681 }, 00:11:38.681 "auth": { 00:11:38.681 "state": "completed", 00:11:38.681 "digest": "sha256", 00:11:38.681 "dhgroup": "ffdhe8192" 00:11:38.681 } 00:11:38.681 } 00:11:38.681 ]' 00:11:38.681 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:38.940 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:38.940 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:38.940 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:38.940 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:38.940 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.940 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.940 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.199 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:11:39.199 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:11:39.766 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.766 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:11:39.766 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.766 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:11:39.766 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.766 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:39.766 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:39.766 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:40.335 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:11:40.335 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:40.335 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:40.335 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:40.335 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:40.335 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.335 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:40.335 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.335 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.335 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.335 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:40.335 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:40.335 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:40.903 00:11:40.903 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:40.903 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:40.903 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:41.162 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:41.163 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:41.163 12:58:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.163 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.163 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.163 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:41.163 { 00:11:41.163 "cntlid": 45, 00:11:41.163 "qid": 0, 00:11:41.163 "state": "enabled", 00:11:41.163 "thread": "nvmf_tgt_poll_group_000", 00:11:41.163 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:11:41.163 "listen_address": { 00:11:41.163 "trtype": "TCP", 00:11:41.163 "adrfam": "IPv4", 00:11:41.163 "traddr": "10.0.0.3", 00:11:41.163 "trsvcid": "4420" 00:11:41.163 }, 00:11:41.163 "peer_address": { 00:11:41.163 "trtype": "TCP", 00:11:41.163 "adrfam": "IPv4", 00:11:41.163 "traddr": "10.0.0.1", 00:11:41.163 "trsvcid": "58530" 00:11:41.163 }, 00:11:41.163 "auth": { 00:11:41.163 "state": "completed", 00:11:41.163 "digest": "sha256", 00:11:41.163 "dhgroup": "ffdhe8192" 00:11:41.163 } 00:11:41.163 } 00:11:41.163 ]' 00:11:41.163 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:41.163 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:41.163 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:41.422 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:41.422 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:41.422 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.422 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.422 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.681 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:11:41.681 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:11:42.249 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.249 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:11:42.249 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:42.249 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.249 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.249 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:42.249 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:42.249 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:42.508 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:11:42.508 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:42.508 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:42.509 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:42.509 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:42.509 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.509 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key3 00:11:42.509 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.509 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.509 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.509 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:42.509 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:42.509 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:43.175 00:11:43.175 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:43.175 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:43.175 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.456 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.456 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.456 
12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.456 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.456 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.456 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:43.456 { 00:11:43.456 "cntlid": 47, 00:11:43.456 "qid": 0, 00:11:43.456 "state": "enabled", 00:11:43.456 "thread": "nvmf_tgt_poll_group_000", 00:11:43.456 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:11:43.456 "listen_address": { 00:11:43.456 "trtype": "TCP", 00:11:43.456 "adrfam": "IPv4", 00:11:43.456 "traddr": "10.0.0.3", 00:11:43.456 "trsvcid": "4420" 00:11:43.456 }, 00:11:43.456 "peer_address": { 00:11:43.456 "trtype": "TCP", 00:11:43.456 "adrfam": "IPv4", 00:11:43.456 "traddr": "10.0.0.1", 00:11:43.456 "trsvcid": "58562" 00:11:43.456 }, 00:11:43.456 "auth": { 00:11:43.456 "state": "completed", 00:11:43.456 "digest": "sha256", 00:11:43.456 "dhgroup": "ffdhe8192" 00:11:43.456 } 00:11:43.456 } 00:11:43.456 ]' 00:11:43.456 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:43.715 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:43.715 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:43.715 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:43.715 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:43.715 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.715 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.715 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.974 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:11:43.974 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:11:44.912 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.912 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:11:44.912 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.912 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:11:44.912 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.912 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:44.912 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:44.912 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:44.912 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:44.912 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:44.912 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:11:44.912 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:44.912 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:44.912 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:44.912 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:44.912 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.912 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.912 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.912 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.172 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.172 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.173 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.173 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.432 00:11:45.432 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:45.432 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.432 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:45.692 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.692 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.692 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.692 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.692 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.692 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:45.692 { 00:11:45.692 "cntlid": 49, 00:11:45.692 "qid": 0, 00:11:45.692 "state": "enabled", 00:11:45.692 "thread": "nvmf_tgt_poll_group_000", 00:11:45.692 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:11:45.692 "listen_address": { 00:11:45.692 "trtype": "TCP", 00:11:45.692 "adrfam": "IPv4", 00:11:45.692 "traddr": "10.0.0.3", 00:11:45.692 "trsvcid": "4420" 00:11:45.692 }, 00:11:45.692 "peer_address": { 00:11:45.692 "trtype": "TCP", 00:11:45.692 "adrfam": "IPv4", 00:11:45.692 "traddr": "10.0.0.1", 00:11:45.692 "trsvcid": "58578" 00:11:45.692 }, 00:11:45.692 "auth": { 00:11:45.692 "state": "completed", 00:11:45.692 "digest": "sha384", 00:11:45.692 "dhgroup": "null" 00:11:45.692 } 00:11:45.692 } 00:11:45.692 ]' 00:11:45.692 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:45.692 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:45.692 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:45.692 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:45.692 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:45.951 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.951 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.951 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.210 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:11:46.210 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:11:46.778 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.778 12:58:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:11:46.778 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.778 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.778 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.778 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:46.778 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:46.778 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:47.037 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:11:47.037 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:47.037 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:47.037 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:47.037 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:47.037 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.037 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:47.037 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.037 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.037 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.037 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:47.037 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:47.037 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:47.296 00:11:47.296 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:47.296 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:47.296 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.863 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.863 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.863 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.863 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.863 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.863 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:47.863 { 00:11:47.863 "cntlid": 51, 00:11:47.863 "qid": 0, 00:11:47.863 "state": "enabled", 00:11:47.863 "thread": "nvmf_tgt_poll_group_000", 00:11:47.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:11:47.863 "listen_address": { 00:11:47.863 "trtype": "TCP", 00:11:47.863 "adrfam": "IPv4", 00:11:47.863 "traddr": "10.0.0.3", 00:11:47.863 "trsvcid": "4420" 00:11:47.863 }, 00:11:47.863 "peer_address": { 00:11:47.863 "trtype": "TCP", 00:11:47.863 "adrfam": "IPv4", 00:11:47.863 "traddr": "10.0.0.1", 00:11:47.863 "trsvcid": "58598" 00:11:47.863 }, 00:11:47.863 "auth": { 00:11:47.863 "state": "completed", 00:11:47.863 "digest": "sha384", 00:11:47.863 "dhgroup": "null" 00:11:47.863 } 00:11:47.863 } 00:11:47.863 ]' 00:11:47.863 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:47.863 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:47.863 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:47.863 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:47.863 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:47.863 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.863 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.863 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.122 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:11:48.122 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:11:48.688 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.688 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.688 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:11:48.688 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.688 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.688 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.688 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:48.688 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:48.688 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:48.945 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:11:48.945 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:48.945 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:48.945 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:48.945 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:48.945 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.945 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.945 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.945 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.945 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.945 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.945 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.945 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:49.510 00:11:49.510 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:49.510 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:11:49.510 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.768 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.768 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.768 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.768 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.768 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.768 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:49.768 { 00:11:49.768 "cntlid": 53, 00:11:49.768 "qid": 0, 00:11:49.768 "state": "enabled", 00:11:49.768 "thread": "nvmf_tgt_poll_group_000", 00:11:49.768 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:11:49.768 "listen_address": { 00:11:49.768 "trtype": "TCP", 00:11:49.768 "adrfam": "IPv4", 00:11:49.768 "traddr": "10.0.0.3", 00:11:49.768 "trsvcid": "4420" 00:11:49.768 }, 00:11:49.768 "peer_address": { 00:11:49.768 "trtype": "TCP", 00:11:49.768 "adrfam": "IPv4", 00:11:49.768 "traddr": "10.0.0.1", 00:11:49.768 "trsvcid": "58614" 00:11:49.768 }, 00:11:49.768 "auth": { 00:11:49.768 "state": "completed", 00:11:49.768 "digest": "sha384", 00:11:49.768 "dhgroup": "null" 00:11:49.768 } 00:11:49.768 } 00:11:49.768 ]' 00:11:49.768 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:49.768 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:49.768 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:49.768 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:49.768 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:49.768 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.768 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.768 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.026 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:11:50.026 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:11:50.959 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.959 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:11:50.959 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.959 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.959 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.959 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:50.959 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:50.959 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:50.959 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:11:50.959 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:50.959 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:50.959 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:50.959 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:50.959 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.959 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key3 00:11:50.959 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.959 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.959 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.959 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:50.959 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:50.959 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:51.526 00:11:51.526 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:51.526 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:11:51.526 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.784 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.784 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.784 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.784 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.784 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.785 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:51.785 { 00:11:51.785 "cntlid": 55, 00:11:51.785 "qid": 0, 00:11:51.785 "state": "enabled", 00:11:51.785 "thread": "nvmf_tgt_poll_group_000", 00:11:51.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:11:51.785 "listen_address": { 00:11:51.785 "trtype": "TCP", 00:11:51.785 "adrfam": "IPv4", 00:11:51.785 "traddr": "10.0.0.3", 00:11:51.785 "trsvcid": "4420" 00:11:51.785 }, 00:11:51.785 "peer_address": { 00:11:51.785 "trtype": "TCP", 00:11:51.785 "adrfam": "IPv4", 00:11:51.785 "traddr": "10.0.0.1", 00:11:51.785 "trsvcid": "53790" 00:11:51.785 }, 00:11:51.785 "auth": { 00:11:51.785 "state": "completed", 00:11:51.785 "digest": "sha384", 00:11:51.785 "dhgroup": "null" 00:11:51.785 } 00:11:51.785 } 00:11:51.785 ]' 00:11:51.785 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:51.785 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:51.785 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:51.785 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:51.785 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:51.785 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.785 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.785 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:52.044 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:11:52.044 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:11:52.979 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:11:52.979 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:11:52.979 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.979 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.979 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.979 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:52.979 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:52.979 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:52.979 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:53.239 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:11:53.239 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:53.239 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:53.239 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:53.239 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:53.239 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:53.239 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:53.239 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.239 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.239 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.239 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:53.239 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:53.239 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:53.498 00:11:53.498 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:53.498 12:58:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:53.498 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.758 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.758 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.758 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.758 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.758 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.758 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:53.758 { 00:11:53.758 "cntlid": 57, 00:11:53.758 "qid": 0, 00:11:53.758 "state": "enabled", 00:11:53.758 "thread": "nvmf_tgt_poll_group_000", 00:11:53.758 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:11:53.758 "listen_address": { 00:11:53.758 "trtype": "TCP", 00:11:53.758 "adrfam": "IPv4", 00:11:53.758 "traddr": "10.0.0.3", 00:11:53.758 "trsvcid": "4420" 00:11:53.758 }, 00:11:53.758 "peer_address": { 00:11:53.758 "trtype": "TCP", 00:11:53.758 "adrfam": "IPv4", 00:11:53.758 "traddr": "10.0.0.1", 00:11:53.758 "trsvcid": "53808" 00:11:53.758 }, 00:11:53.758 "auth": { 00:11:53.758 "state": "completed", 00:11:53.758 "digest": "sha384", 00:11:53.758 "dhgroup": "ffdhe2048" 00:11:53.758 } 00:11:53.758 } 00:11:53.758 ]' 00:11:53.758 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:53.758 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:54.015 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:54.015 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:54.015 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:54.015 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:54.015 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:54.015 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:54.273 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:11:54.273 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: 
--dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:11:54.840 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.840 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:11:54.840 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.840 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.099 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.099 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:55.099 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:55.099 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:55.368 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:11:55.368 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:55.368 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:55.368 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:55.368 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:55.368 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:55.368 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:55.368 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.368 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.368 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.368 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:55.368 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:55.368 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:55.636 00:11:55.637 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:55.637 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:55.637 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.895 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.895 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.895 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.895 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.895 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.895 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:55.895 { 00:11:55.895 "cntlid": 59, 00:11:55.895 "qid": 0, 00:11:55.895 "state": "enabled", 00:11:55.895 "thread": "nvmf_tgt_poll_group_000", 00:11:55.895 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:11:55.895 "listen_address": { 00:11:55.895 "trtype": "TCP", 00:11:55.895 "adrfam": "IPv4", 00:11:55.895 "traddr": "10.0.0.3", 00:11:55.895 "trsvcid": "4420" 00:11:55.895 }, 00:11:55.895 "peer_address": { 00:11:55.895 "trtype": "TCP", 00:11:55.895 "adrfam": "IPv4", 00:11:55.895 "traddr": "10.0.0.1", 00:11:55.895 "trsvcid": "53830" 00:11:55.895 }, 00:11:55.895 "auth": { 00:11:55.895 "state": "completed", 00:11:55.895 "digest": "sha384", 00:11:55.895 "dhgroup": "ffdhe2048" 00:11:55.895 } 00:11:55.895 } 00:11:55.895 ]' 00:11:55.895 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:55.895 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:55.895 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:56.154 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:56.154 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:56.154 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:56.154 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:56.154 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:56.412 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:11:56.412 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:11:56.977 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.977 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:11:56.977 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.977 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.977 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.977 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:56.977 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:56.977 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:57.236 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:11:57.236 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:57.236 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:57.236 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:57.236 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:57.236 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:57.236 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:57.236 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.236 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.236 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.236 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:57.236 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:57.236 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:57.803 00:11:57.803 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:57.803 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:57.803 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:58.062 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:58.062 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:58.062 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.062 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.062 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.062 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:58.062 { 00:11:58.062 "cntlid": 61, 00:11:58.062 "qid": 0, 00:11:58.062 "state": "enabled", 00:11:58.062 "thread": "nvmf_tgt_poll_group_000", 00:11:58.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:11:58.062 "listen_address": { 00:11:58.062 "trtype": "TCP", 00:11:58.062 "adrfam": "IPv4", 00:11:58.062 "traddr": "10.0.0.3", 00:11:58.062 "trsvcid": "4420" 00:11:58.062 }, 00:11:58.062 "peer_address": { 00:11:58.062 "trtype": "TCP", 00:11:58.062 "adrfam": "IPv4", 00:11:58.063 "traddr": "10.0.0.1", 00:11:58.063 "trsvcid": "53866" 00:11:58.063 }, 00:11:58.063 "auth": { 00:11:58.063 "state": "completed", 00:11:58.063 "digest": "sha384", 00:11:58.063 "dhgroup": "ffdhe2048" 00:11:58.063 } 00:11:58.063 } 00:11:58.063 ]' 00:11:58.063 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:58.063 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:58.063 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:58.063 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:58.063 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:58.063 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:58.063 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:58.063 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:58.321 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:11:58.322 12:58:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:11:59.258 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:59.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:59.258 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:11:59.258 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.258 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.258 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.258 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:59.258 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:59.258 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:59.518 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:11:59.518 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:59.518 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:59.518 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:59.518 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:59.518 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.518 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key3 00:11:59.518 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.518 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.518 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.518 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:59.518 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:59.518 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:59.777 00:11:59.777 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:59.777 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.777 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:00.037 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:00.037 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:00.037 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.037 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.037 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.037 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:00.037 { 00:12:00.037 "cntlid": 63, 00:12:00.037 "qid": 0, 00:12:00.037 "state": "enabled", 00:12:00.037 "thread": "nvmf_tgt_poll_group_000", 00:12:00.037 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:12:00.037 "listen_address": { 00:12:00.037 "trtype": "TCP", 00:12:00.037 "adrfam": "IPv4", 00:12:00.037 "traddr": "10.0.0.3", 00:12:00.037 "trsvcid": "4420" 00:12:00.037 }, 00:12:00.037 "peer_address": { 00:12:00.037 "trtype": "TCP", 00:12:00.037 "adrfam": "IPv4", 00:12:00.037 "traddr": "10.0.0.1", 00:12:00.037 "trsvcid": "53900" 00:12:00.037 }, 00:12:00.037 "auth": { 00:12:00.037 "state": "completed", 00:12:00.037 "digest": "sha384", 00:12:00.037 "dhgroup": "ffdhe2048" 00:12:00.037 } 00:12:00.037 } 00:12:00.037 ]' 00:12:00.037 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:00.037 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:00.037 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:00.296 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:00.296 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:00.296 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.296 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.296 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.555 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:12:00.555 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:12:01.122 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.122 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:12:01.122 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.122 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.122 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.122 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:01.122 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:01.122 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:01.123 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:01.690 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:12:01.690 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:01.690 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:01.690 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:01.690 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:01.690 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.690 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.690 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.690 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.690 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.690 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.690 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:12:01.690 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.949 00:12:01.949 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:01.949 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.949 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:02.207 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.207 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.207 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.207 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.207 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.207 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:02.207 { 00:12:02.207 "cntlid": 65, 00:12:02.207 "qid": 0, 00:12:02.207 "state": "enabled", 00:12:02.207 "thread": "nvmf_tgt_poll_group_000", 00:12:02.207 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:12:02.207 "listen_address": { 00:12:02.207 "trtype": "TCP", 00:12:02.207 "adrfam": "IPv4", 00:12:02.207 "traddr": "10.0.0.3", 00:12:02.207 "trsvcid": "4420" 00:12:02.207 }, 00:12:02.207 "peer_address": { 00:12:02.207 "trtype": "TCP", 00:12:02.207 "adrfam": "IPv4", 00:12:02.207 "traddr": "10.0.0.1", 00:12:02.207 "trsvcid": "55202" 00:12:02.207 }, 00:12:02.207 "auth": { 00:12:02.207 "state": "completed", 00:12:02.207 "digest": "sha384", 00:12:02.207 "dhgroup": "ffdhe3072" 00:12:02.207 } 00:12:02.207 } 00:12:02.207 ]' 00:12:02.207 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:02.207 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:02.207 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:02.467 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:02.467 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:02.467 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.467 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.467 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.726 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:12:02.726 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:12:03.662 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.662 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:12:03.662 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.662 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.662 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.662 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:03.662 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:03.662 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:03.662 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:12:03.662 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:03.662 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:03.662 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:03.662 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:03.662 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.662 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.662 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.662 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.921 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.921 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.921 12:58:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.921 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.180 00:12:04.180 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:04.180 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:04.180 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.439 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.439 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.439 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.439 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.439 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.439 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:04.439 { 00:12:04.439 "cntlid": 67, 00:12:04.439 "qid": 0, 00:12:04.439 "state": "enabled", 00:12:04.439 "thread": "nvmf_tgt_poll_group_000", 00:12:04.439 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:12:04.439 "listen_address": { 00:12:04.439 "trtype": "TCP", 00:12:04.439 "adrfam": "IPv4", 00:12:04.439 "traddr": "10.0.0.3", 00:12:04.439 "trsvcid": "4420" 00:12:04.439 }, 00:12:04.439 "peer_address": { 00:12:04.439 "trtype": "TCP", 00:12:04.439 "adrfam": "IPv4", 00:12:04.439 "traddr": "10.0.0.1", 00:12:04.439 "trsvcid": "55232" 00:12:04.439 }, 00:12:04.439 "auth": { 00:12:04.439 "state": "completed", 00:12:04.439 "digest": "sha384", 00:12:04.439 "dhgroup": "ffdhe3072" 00:12:04.439 } 00:12:04.439 } 00:12:04.439 ]' 00:12:04.439 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:04.698 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:04.698 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:04.698 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:04.698 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:04.698 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.698 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.698 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.957 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:12:04.957 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:12:05.901 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.901 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:12:05.901 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.901 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.901 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.901 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:05.901 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:05.901 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:05.901 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:12:05.901 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:05.901 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:05.901 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:05.901 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:05.901 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.901 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.901 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.901 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.901 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.901 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.901 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.901 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:06.469 00:12:06.469 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:06.469 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:06.469 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.469 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.469 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.469 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.469 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.469 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.469 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:06.469 { 00:12:06.469 "cntlid": 69, 00:12:06.469 "qid": 0, 00:12:06.469 "state": "enabled", 00:12:06.469 "thread": "nvmf_tgt_poll_group_000", 00:12:06.469 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:12:06.469 "listen_address": { 00:12:06.469 "trtype": "TCP", 00:12:06.469 "adrfam": "IPv4", 00:12:06.469 "traddr": "10.0.0.3", 00:12:06.469 "trsvcid": "4420" 00:12:06.469 }, 00:12:06.469 "peer_address": { 00:12:06.469 "trtype": "TCP", 00:12:06.469 "adrfam": "IPv4", 00:12:06.469 "traddr": "10.0.0.1", 00:12:06.469 "trsvcid": "55246" 00:12:06.469 }, 00:12:06.469 "auth": { 00:12:06.469 "state": "completed", 00:12:06.469 "digest": "sha384", 00:12:06.469 "dhgroup": "ffdhe3072" 00:12:06.469 } 00:12:06.469 } 00:12:06.469 ]' 00:12:06.469 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:06.728 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:06.728 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:06.728 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:06.729 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:06.729 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.729 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:12:06.729 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.988 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:12:06.988 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:12:07.577 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.843 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:12:07.843 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.843 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.843 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.843 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:07.843 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:07.843 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:07.843 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:12:07.843 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:07.843 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:07.843 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:07.843 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:07.843 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.843 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key3 00:12:07.843 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.843 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.843 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.843 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:07.843 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:07.843 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:08.411 00:12:08.411 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:08.411 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:08.411 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.671 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.671 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.671 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.671 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.671 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.671 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:08.671 { 00:12:08.671 "cntlid": 71, 00:12:08.671 "qid": 0, 00:12:08.671 "state": "enabled", 00:12:08.671 "thread": "nvmf_tgt_poll_group_000", 00:12:08.671 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:12:08.671 "listen_address": { 00:12:08.671 "trtype": "TCP", 00:12:08.671 "adrfam": "IPv4", 00:12:08.671 "traddr": "10.0.0.3", 00:12:08.671 "trsvcid": "4420" 00:12:08.671 }, 00:12:08.671 "peer_address": { 00:12:08.671 "trtype": "TCP", 00:12:08.671 "adrfam": "IPv4", 00:12:08.671 "traddr": "10.0.0.1", 00:12:08.671 "trsvcid": "55254" 00:12:08.671 }, 00:12:08.671 "auth": { 00:12:08.671 "state": "completed", 00:12:08.671 "digest": "sha384", 00:12:08.671 "dhgroup": "ffdhe3072" 00:12:08.671 } 00:12:08.671 } 00:12:08.671 ]' 00:12:08.671 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:08.671 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:08.671 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:08.671 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:08.671 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:08.671 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.671 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.671 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:09.239 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:12:09.239 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:12:09.807 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.807 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:12:09.807 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.807 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.807 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.807 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:09.807 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:09.807 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:09.807 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:10.066 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:12:10.066 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:10.066 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:10.066 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:10.066 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:10.066 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.066 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:10.066 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.066 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.066 12:58:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.066 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:10.066 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:10.066 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:10.325 00:12:10.325 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:10.325 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:10.325 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.584 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.584 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.584 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.584 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.584 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.584 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:10.584 { 00:12:10.584 "cntlid": 73, 00:12:10.584 "qid": 0, 00:12:10.584 "state": "enabled", 00:12:10.584 "thread": "nvmf_tgt_poll_group_000", 00:12:10.584 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:12:10.584 "listen_address": { 00:12:10.584 "trtype": "TCP", 00:12:10.584 "adrfam": "IPv4", 00:12:10.584 "traddr": "10.0.0.3", 00:12:10.584 "trsvcid": "4420" 00:12:10.584 }, 00:12:10.584 "peer_address": { 00:12:10.584 "trtype": "TCP", 00:12:10.584 "adrfam": "IPv4", 00:12:10.584 "traddr": "10.0.0.1", 00:12:10.584 "trsvcid": "48362" 00:12:10.584 }, 00:12:10.584 "auth": { 00:12:10.584 "state": "completed", 00:12:10.584 "digest": "sha384", 00:12:10.584 "dhgroup": "ffdhe4096" 00:12:10.584 } 00:12:10.584 } 00:12:10.584 ]' 00:12:10.584 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:10.584 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:10.584 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:10.843 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:10.843 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:10.843 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.843 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.843 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.102 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:12:11.102 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:12:11.670 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.670 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:12:11.670 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.670 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.670 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.670 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:11.670 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:11.670 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:11.929 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:12:11.929 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:11.929 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:11.929 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:11.929 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:11.929 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.929 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.929 12:58:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.929 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.929 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.929 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.929 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.929 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:12.498 00:12:12.498 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:12.498 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.498 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:12.757 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.757 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.757 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.757 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.757 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.757 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:12.757 { 00:12:12.758 "cntlid": 75, 00:12:12.758 "qid": 0, 00:12:12.758 "state": "enabled", 00:12:12.758 "thread": "nvmf_tgt_poll_group_000", 00:12:12.758 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:12:12.758 "listen_address": { 00:12:12.758 "trtype": "TCP", 00:12:12.758 "adrfam": "IPv4", 00:12:12.758 "traddr": "10.0.0.3", 00:12:12.758 "trsvcid": "4420" 00:12:12.758 }, 00:12:12.758 "peer_address": { 00:12:12.758 "trtype": "TCP", 00:12:12.758 "adrfam": "IPv4", 00:12:12.758 "traddr": "10.0.0.1", 00:12:12.758 "trsvcid": "48406" 00:12:12.758 }, 00:12:12.758 "auth": { 00:12:12.758 "state": "completed", 00:12:12.758 "digest": "sha384", 00:12:12.758 "dhgroup": "ffdhe4096" 00:12:12.758 } 00:12:12.758 } 00:12:12.758 ]' 00:12:12.758 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:12.758 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:12.758 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:12.758 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:12:12.758 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:12.758 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.758 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.758 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:13.326 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:12:13.326 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:12:13.894 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.894 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:12:13.894 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.894 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.894 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.894 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:13.894 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:13.894 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:14.153 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:12:14.153 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:14.153 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:14.153 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:14.153 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:14.153 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.153 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:14.153 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.153 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.153 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.153 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:14.153 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:14.153 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:14.720 00:12:14.720 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:14.720 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:14.720 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.979 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.979 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.979 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.979 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.979 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.979 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:14.979 { 00:12:14.979 "cntlid": 77, 00:12:14.980 "qid": 0, 00:12:14.980 "state": "enabled", 00:12:14.980 "thread": "nvmf_tgt_poll_group_000", 00:12:14.980 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:12:14.980 "listen_address": { 00:12:14.980 "trtype": "TCP", 00:12:14.980 "adrfam": "IPv4", 00:12:14.980 "traddr": "10.0.0.3", 00:12:14.980 "trsvcid": "4420" 00:12:14.980 }, 00:12:14.980 "peer_address": { 00:12:14.980 "trtype": "TCP", 00:12:14.980 "adrfam": "IPv4", 00:12:14.980 "traddr": "10.0.0.1", 00:12:14.980 "trsvcid": "48436" 00:12:14.980 }, 00:12:14.980 "auth": { 00:12:14.980 "state": "completed", 00:12:14.980 "digest": "sha384", 00:12:14.980 "dhgroup": "ffdhe4096" 00:12:14.980 } 00:12:14.980 } 00:12:14.980 ]' 00:12:14.980 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:14.980 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:14.980 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:12:14.980 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:14.980 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:14.980 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.980 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.980 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:15.239 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:12:15.239 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:12:16.177 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:16.177 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:12:16.177 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.177 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.177 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.177 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:16.177 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:16.177 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:16.436 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:12:16.436 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:16.436 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:16.436 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:16.436 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:16.436 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:16.436 12:58:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key3 00:12:16.436 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.436 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.436 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.436 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:16.436 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:16.436 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:16.695 00:12:16.695 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:16.695 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.695 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:16.953 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.953 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.953 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.953 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.953 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.953 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:16.953 { 00:12:16.953 "cntlid": 79, 00:12:16.953 "qid": 0, 00:12:16.953 "state": "enabled", 00:12:16.953 "thread": "nvmf_tgt_poll_group_000", 00:12:16.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:12:16.953 "listen_address": { 00:12:16.953 "trtype": "TCP", 00:12:16.953 "adrfam": "IPv4", 00:12:16.953 "traddr": "10.0.0.3", 00:12:16.953 "trsvcid": "4420" 00:12:16.953 }, 00:12:16.953 "peer_address": { 00:12:16.953 "trtype": "TCP", 00:12:16.953 "adrfam": "IPv4", 00:12:16.953 "traddr": "10.0.0.1", 00:12:16.953 "trsvcid": "48468" 00:12:16.953 }, 00:12:16.953 "auth": { 00:12:16.953 "state": "completed", 00:12:16.953 "digest": "sha384", 00:12:16.953 "dhgroup": "ffdhe4096" 00:12:16.953 } 00:12:16.954 } 00:12:16.954 ]' 00:12:16.954 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:17.212 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:17.212 12:58:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:17.212 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:17.212 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:17.212 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.212 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.212 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.471 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:12:17.471 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:12:18.407 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.407 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:12:18.407 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.407 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.407 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.407 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:18.407 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:18.407 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:18.408 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:18.666 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:12:18.666 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:18.666 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:18.666 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:18.666 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:18.667 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.667 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:18.667 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.667 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.667 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.667 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:18.667 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:18.667 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:18.925 00:12:19.184 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:19.184 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:19.184 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.443 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.443 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.443 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.443 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.443 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.443 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:19.443 { 00:12:19.443 "cntlid": 81, 00:12:19.443 "qid": 0, 00:12:19.443 "state": "enabled", 00:12:19.443 "thread": "nvmf_tgt_poll_group_000", 00:12:19.443 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:12:19.443 "listen_address": { 00:12:19.443 "trtype": "TCP", 00:12:19.443 "adrfam": "IPv4", 00:12:19.443 "traddr": "10.0.0.3", 00:12:19.443 "trsvcid": "4420" 00:12:19.443 }, 00:12:19.443 "peer_address": { 00:12:19.443 "trtype": "TCP", 00:12:19.443 "adrfam": "IPv4", 00:12:19.443 "traddr": "10.0.0.1", 00:12:19.443 "trsvcid": "48476" 00:12:19.443 }, 00:12:19.443 "auth": { 00:12:19.443 "state": "completed", 00:12:19.443 "digest": "sha384", 00:12:19.443 "dhgroup": "ffdhe6144" 00:12:19.443 } 00:12:19.443 } 00:12:19.443 ]' 00:12:19.443 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:12:19.443 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:19.443 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:19.443 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:19.443 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:19.443 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.443 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:19.443 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.702 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:12:19.702 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:12:20.654 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.654 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:12:20.654 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.654 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.654 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.654 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:20.654 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:20.654 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:20.913 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:12:20.913 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:20.913 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:20.913 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:12:20.913 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:20.913 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.913 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.913 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.913 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.913 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.913 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.913 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.913 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.479 00:12:21.479 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:21.479 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.479 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:21.479 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.479 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.479 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.479 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.479 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.479 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:21.479 { 00:12:21.479 "cntlid": 83, 00:12:21.479 "qid": 0, 00:12:21.479 "state": "enabled", 00:12:21.479 "thread": "nvmf_tgt_poll_group_000", 00:12:21.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:12:21.479 "listen_address": { 00:12:21.479 "trtype": "TCP", 00:12:21.479 "adrfam": "IPv4", 00:12:21.479 "traddr": "10.0.0.3", 00:12:21.479 "trsvcid": "4420" 00:12:21.479 }, 00:12:21.479 "peer_address": { 00:12:21.479 "trtype": "TCP", 00:12:21.479 "adrfam": "IPv4", 00:12:21.479 "traddr": "10.0.0.1", 00:12:21.479 "trsvcid": "58970" 00:12:21.479 }, 00:12:21.479 "auth": { 00:12:21.479 "state": "completed", 00:12:21.479 "digest": "sha384", 
00:12:21.479 "dhgroup": "ffdhe6144" 00:12:21.479 } 00:12:21.479 } 00:12:21.479 ]' 00:12:21.479 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:21.738 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:21.738 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:21.738 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:21.738 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:21.738 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.738 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.738 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.996 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:12:21.996 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:12:22.931 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.931 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:12:22.931 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.931 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.931 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.931 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:22.931 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:22.931 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:22.931 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:12:22.931 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:22.931 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:12:22.931 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:22.931 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:22.932 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.932 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:22.932 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.932 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.932 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.932 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:22.932 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:22.932 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:23.498 00:12:23.499 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:23.499 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:23.499 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.073 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.073 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.073 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.073 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.073 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.073 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:24.073 { 00:12:24.073 "cntlid": 85, 00:12:24.073 "qid": 0, 00:12:24.073 "state": "enabled", 00:12:24.073 "thread": "nvmf_tgt_poll_group_000", 00:12:24.073 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:12:24.073 "listen_address": { 00:12:24.073 "trtype": "TCP", 00:12:24.073 "adrfam": "IPv4", 00:12:24.073 "traddr": "10.0.0.3", 00:12:24.073 "trsvcid": "4420" 00:12:24.073 }, 00:12:24.073 "peer_address": { 00:12:24.073 "trtype": "TCP", 00:12:24.073 "adrfam": "IPv4", 00:12:24.073 "traddr": "10.0.0.1", 00:12:24.073 "trsvcid": "59008" 
00:12:24.073 }, 00:12:24.073 "auth": { 00:12:24.073 "state": "completed", 00:12:24.073 "digest": "sha384", 00:12:24.073 "dhgroup": "ffdhe6144" 00:12:24.073 } 00:12:24.073 } 00:12:24.073 ]' 00:12:24.073 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:24.073 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:24.073 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:24.073 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:24.073 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:24.073 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.073 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.073 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.332 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:12:24.332 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:12:25.269 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.269 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:12:25.269 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.269 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.269 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.269 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:25.269 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:25.269 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:25.269 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:12:25.269 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:12:25.269 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:25.269 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:25.269 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:25.269 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.269 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key3 00:12:25.269 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.269 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.269 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.269 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:25.269 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:25.269 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:25.839 00:12:25.840 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:25.840 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:25.840 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.100 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.100 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.100 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.100 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.100 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.100 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:26.100 { 00:12:26.100 "cntlid": 87, 00:12:26.100 "qid": 0, 00:12:26.100 "state": "enabled", 00:12:26.100 "thread": "nvmf_tgt_poll_group_000", 00:12:26.100 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:12:26.100 "listen_address": { 00:12:26.100 "trtype": "TCP", 00:12:26.100 "adrfam": "IPv4", 00:12:26.100 "traddr": "10.0.0.3", 00:12:26.100 "trsvcid": "4420" 00:12:26.100 }, 00:12:26.100 "peer_address": { 00:12:26.100 "trtype": "TCP", 00:12:26.100 "adrfam": "IPv4", 00:12:26.100 "traddr": "10.0.0.1", 00:12:26.100 "trsvcid": 
"59026" 00:12:26.100 }, 00:12:26.100 "auth": { 00:12:26.100 "state": "completed", 00:12:26.100 "digest": "sha384", 00:12:26.100 "dhgroup": "ffdhe6144" 00:12:26.100 } 00:12:26.100 } 00:12:26.100 ]' 00:12:26.100 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:26.359 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:26.359 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:26.359 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:26.359 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:26.359 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.359 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.359 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.618 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:12:26.618 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:12:27.186 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.186 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:12:27.186 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.186 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.186 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.186 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:27.186 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:27.186 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:27.186 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:27.755 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:12:27.755 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:12:27.755 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:27.755 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:27.755 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:27.755 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.755 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.755 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.755 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.755 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.755 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.755 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.755 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:28.322 00:12:28.322 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:28.322 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:28.322 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.582 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.582 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.582 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.582 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.582 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.582 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:28.582 { 00:12:28.582 "cntlid": 89, 00:12:28.582 "qid": 0, 00:12:28.582 "state": "enabled", 00:12:28.582 "thread": "nvmf_tgt_poll_group_000", 00:12:28.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:12:28.582 "listen_address": { 00:12:28.582 "trtype": "TCP", 00:12:28.582 "adrfam": "IPv4", 00:12:28.582 "traddr": "10.0.0.3", 00:12:28.582 "trsvcid": "4420" 00:12:28.582 }, 00:12:28.582 "peer_address": { 00:12:28.582 
"trtype": "TCP", 00:12:28.582 "adrfam": "IPv4", 00:12:28.582 "traddr": "10.0.0.1", 00:12:28.582 "trsvcid": "59052" 00:12:28.582 }, 00:12:28.582 "auth": { 00:12:28.582 "state": "completed", 00:12:28.582 "digest": "sha384", 00:12:28.582 "dhgroup": "ffdhe8192" 00:12:28.582 } 00:12:28.582 } 00:12:28.582 ]' 00:12:28.582 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:28.582 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:28.582 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:28.842 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:28.842 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:28.842 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.842 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.842 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.101 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:12:29.101 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:12:29.671 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.671 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:12:29.671 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.671 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.671 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.671 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:29.671 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:29.671 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:30.277 12:59:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:12:30.277 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:30.277 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:30.277 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:30.277 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:30.277 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.277 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.277 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.277 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.277 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.277 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.277 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.277 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.845 00:12:30.845 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:30.845 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:30.845 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.103 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.104 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.104 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.104 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.104 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.104 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:31.104 { 00:12:31.104 "cntlid": 91, 00:12:31.104 "qid": 0, 00:12:31.104 "state": "enabled", 00:12:31.104 "thread": "nvmf_tgt_poll_group_000", 00:12:31.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 
00:12:31.104 "listen_address": { 00:12:31.104 "trtype": "TCP", 00:12:31.104 "adrfam": "IPv4", 00:12:31.104 "traddr": "10.0.0.3", 00:12:31.104 "trsvcid": "4420" 00:12:31.104 }, 00:12:31.104 "peer_address": { 00:12:31.104 "trtype": "TCP", 00:12:31.104 "adrfam": "IPv4", 00:12:31.104 "traddr": "10.0.0.1", 00:12:31.104 "trsvcid": "37054" 00:12:31.104 }, 00:12:31.104 "auth": { 00:12:31.104 "state": "completed", 00:12:31.104 "digest": "sha384", 00:12:31.104 "dhgroup": "ffdhe8192" 00:12:31.104 } 00:12:31.104 } 00:12:31.104 ]' 00:12:31.104 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:31.104 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:31.104 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:31.104 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:31.104 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:31.362 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.362 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.362 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.621 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:12:31.621 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:12:32.188 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.188 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:12:32.188 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.188 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.188 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.188 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:32.188 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:32.188 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:32.756 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:12:32.756 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:32.756 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:32.756 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:32.756 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:32.756 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.756 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:32.756 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.756 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.756 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.756 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:32.756 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:32.756 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:33.325 00:12:33.325 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:33.325 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:33.325 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.584 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.584 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.584 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.584 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.584 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.584 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:33.584 { 00:12:33.584 "cntlid": 93, 00:12:33.584 "qid": 0, 00:12:33.584 "state": "enabled", 00:12:33.584 "thread": 
"nvmf_tgt_poll_group_000", 00:12:33.584 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:12:33.584 "listen_address": { 00:12:33.584 "trtype": "TCP", 00:12:33.584 "adrfam": "IPv4", 00:12:33.584 "traddr": "10.0.0.3", 00:12:33.584 "trsvcid": "4420" 00:12:33.584 }, 00:12:33.584 "peer_address": { 00:12:33.584 "trtype": "TCP", 00:12:33.584 "adrfam": "IPv4", 00:12:33.584 "traddr": "10.0.0.1", 00:12:33.584 "trsvcid": "37078" 00:12:33.584 }, 00:12:33.584 "auth": { 00:12:33.584 "state": "completed", 00:12:33.584 "digest": "sha384", 00:12:33.584 "dhgroup": "ffdhe8192" 00:12:33.584 } 00:12:33.584 } 00:12:33.584 ]' 00:12:33.584 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:33.584 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:33.584 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:33.584 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:33.584 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:33.857 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.857 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.857 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.115 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:12:34.115 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:12:34.681 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.681 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:12:34.681 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.681 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.681 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.681 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:34.681 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:34.681 12:59:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:34.940 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:12:34.940 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:34.940 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:34.940 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:34.940 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:34.940 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.940 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key3 00:12:34.940 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.940 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.940 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.940 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:34.940 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:34.940 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:35.532 00:12:35.532 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:35.532 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:35.532 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.790 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.790 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.790 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.790 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.049 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.049 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:36.049 { 00:12:36.049 "cntlid": 95, 00:12:36.049 "qid": 0, 00:12:36.049 "state": "enabled", 00:12:36.049 
"thread": "nvmf_tgt_poll_group_000", 00:12:36.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:12:36.049 "listen_address": { 00:12:36.049 "trtype": "TCP", 00:12:36.049 "adrfam": "IPv4", 00:12:36.049 "traddr": "10.0.0.3", 00:12:36.049 "trsvcid": "4420" 00:12:36.049 }, 00:12:36.049 "peer_address": { 00:12:36.049 "trtype": "TCP", 00:12:36.049 "adrfam": "IPv4", 00:12:36.049 "traddr": "10.0.0.1", 00:12:36.049 "trsvcid": "37096" 00:12:36.049 }, 00:12:36.049 "auth": { 00:12:36.049 "state": "completed", 00:12:36.049 "digest": "sha384", 00:12:36.049 "dhgroup": "ffdhe8192" 00:12:36.049 } 00:12:36.049 } 00:12:36.049 ]' 00:12:36.049 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:36.049 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:36.049 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:36.049 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:36.049 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:36.049 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.049 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.049 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.308 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:12:36.308 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:12:37.243 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.243 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:12:37.243 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.243 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.243 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.243 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:37.243 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:37.243 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:37.243 12:59:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:37.243 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:37.501 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:12:37.501 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:37.501 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:37.501 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:37.501 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:37.501 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.501 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:37.501 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.501 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.501 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.501 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:37.501 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:37.501 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:37.759 00:12:37.759 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:37.759 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:37.759 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.018 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.018 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.018 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.018 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.018 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.018 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:38.018 { 00:12:38.018 "cntlid": 97, 00:12:38.018 "qid": 0, 00:12:38.018 "state": "enabled", 00:12:38.018 "thread": "nvmf_tgt_poll_group_000", 00:12:38.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:12:38.018 "listen_address": { 00:12:38.018 "trtype": "TCP", 00:12:38.018 "adrfam": "IPv4", 00:12:38.018 "traddr": "10.0.0.3", 00:12:38.018 "trsvcid": "4420" 00:12:38.018 }, 00:12:38.018 "peer_address": { 00:12:38.018 "trtype": "TCP", 00:12:38.019 "adrfam": "IPv4", 00:12:38.019 "traddr": "10.0.0.1", 00:12:38.019 "trsvcid": "37124" 00:12:38.019 }, 00:12:38.019 "auth": { 00:12:38.019 "state": "completed", 00:12:38.019 "digest": "sha512", 00:12:38.019 "dhgroup": "null" 00:12:38.019 } 00:12:38.019 } 00:12:38.019 ]' 00:12:38.019 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:38.019 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:38.019 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:38.277 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:38.277 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:38.277 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.277 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.277 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.535 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:12:38.535 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:12:39.105 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.105 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:12:39.105 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.105 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.105 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:39.106 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:39.106 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:39.106 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:39.698 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:12:39.698 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:39.698 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:39.698 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:39.698 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:39.698 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.698 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.698 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.698 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.698 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.698 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.698 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.698 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.959 00:12:39.959 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:39.959 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:39.959 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.219 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.219 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.219 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.219 12:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.219 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.219 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:40.219 { 00:12:40.219 "cntlid": 99, 00:12:40.219 "qid": 0, 00:12:40.219 "state": "enabled", 00:12:40.219 "thread": "nvmf_tgt_poll_group_000", 00:12:40.219 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:12:40.219 "listen_address": { 00:12:40.219 "trtype": "TCP", 00:12:40.219 "adrfam": "IPv4", 00:12:40.219 "traddr": "10.0.0.3", 00:12:40.219 "trsvcid": "4420" 00:12:40.219 }, 00:12:40.219 "peer_address": { 00:12:40.219 "trtype": "TCP", 00:12:40.219 "adrfam": "IPv4", 00:12:40.219 "traddr": "10.0.0.1", 00:12:40.219 "trsvcid": "37138" 00:12:40.219 }, 00:12:40.219 "auth": { 00:12:40.219 "state": "completed", 00:12:40.219 "digest": "sha512", 00:12:40.219 "dhgroup": "null" 00:12:40.219 } 00:12:40.219 } 00:12:40.219 ]' 00:12:40.219 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:40.219 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:40.219 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:40.219 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:40.219 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:40.478 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.478 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.478 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.737 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:12:40.737 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:12:41.305 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.305 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:12:41.305 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.306 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.306 12:59:12 
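Besides the SPDK initiator, each pass also authenticates the kernel initiator through nvme-cli, which is the 'nvme connect ... --dhchap-secret ... --dhchap-ctrl-secret ...' / 'nvme disconnect' pair seen above. The same step with placeholder secrets (a real run passes the DHHC-1:... strings printed in this log):

HOST_SECRET='DHHC-1:01:<host secret>:'        # placeholder
CTRL_SECRET='DHHC-1:02:<controller secret>:'  # placeholder

nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
    -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 \
    --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 \
    --dhchap-secret "$HOST_SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"

nvme disconnect -n nqn.2024-03.io.spdk:cnode0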
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.306 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:41.306 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:41.306 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:41.565 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:12:41.565 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:41.565 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:41.565 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:41.565 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:41.565 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.565 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.565 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.565 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.565 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.565 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.565 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.565 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.824 00:12:42.083 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:42.083 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:42.083 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.342 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.342 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.342 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.342 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.342 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.342 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:42.342 { 00:12:42.342 "cntlid": 101, 00:12:42.342 "qid": 0, 00:12:42.342 "state": "enabled", 00:12:42.342 "thread": "nvmf_tgt_poll_group_000", 00:12:42.342 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:12:42.342 "listen_address": { 00:12:42.342 "trtype": "TCP", 00:12:42.342 "adrfam": "IPv4", 00:12:42.342 "traddr": "10.0.0.3", 00:12:42.342 "trsvcid": "4420" 00:12:42.342 }, 00:12:42.342 "peer_address": { 00:12:42.342 "trtype": "TCP", 00:12:42.342 "adrfam": "IPv4", 00:12:42.342 "traddr": "10.0.0.1", 00:12:42.342 "trsvcid": "57728" 00:12:42.342 }, 00:12:42.342 "auth": { 00:12:42.342 "state": "completed", 00:12:42.342 "digest": "sha512", 00:12:42.342 "dhgroup": "null" 00:12:42.342 } 00:12:42.342 } 00:12:42.342 ]' 00:12:42.342 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:42.342 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:42.342 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:42.342 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:42.342 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:42.601 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.601 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.601 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.860 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:12:42.860 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:12:43.428 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.428 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:12:43.428 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.428 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:12:43.428 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.428 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:43.428 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:43.428 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:43.996 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:12:43.996 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:43.996 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:43.996 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:43.996 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:43.996 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.996 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key3 00:12:43.996 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.996 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.996 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.996 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:43.996 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:43.996 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:44.254 00:12:44.254 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:44.254 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.254 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:44.512 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.512 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.512 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:44.512 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.512 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.512 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:44.512 { 00:12:44.512 "cntlid": 103, 00:12:44.512 "qid": 0, 00:12:44.512 "state": "enabled", 00:12:44.512 "thread": "nvmf_tgt_poll_group_000", 00:12:44.512 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:12:44.513 "listen_address": { 00:12:44.513 "trtype": "TCP", 00:12:44.513 "adrfam": "IPv4", 00:12:44.513 "traddr": "10.0.0.3", 00:12:44.513 "trsvcid": "4420" 00:12:44.513 }, 00:12:44.513 "peer_address": { 00:12:44.513 "trtype": "TCP", 00:12:44.513 "adrfam": "IPv4", 00:12:44.513 "traddr": "10.0.0.1", 00:12:44.513 "trsvcid": "57746" 00:12:44.513 }, 00:12:44.513 "auth": { 00:12:44.513 "state": "completed", 00:12:44.513 "digest": "sha512", 00:12:44.513 "dhgroup": "null" 00:12:44.513 } 00:12:44.513 } 00:12:44.513 ]' 00:12:44.513 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:44.513 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:44.513 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:44.513 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:44.513 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:44.771 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.771 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.771 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.030 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:12:45.030 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:12:45.598 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.598 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:12:45.598 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.598 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.598 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:12:45.598 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:45.598 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:45.598 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:45.598 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:45.858 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:12:45.858 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:45.858 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:45.858 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:45.858 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:45.858 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:45.858 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:45.858 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.858 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.858 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.858 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:45.858 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:45.858 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:46.426 00:12:46.426 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:46.426 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.426 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:46.685 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.685 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.685 
12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.685 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.685 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.685 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:46.685 { 00:12:46.685 "cntlid": 105, 00:12:46.685 "qid": 0, 00:12:46.685 "state": "enabled", 00:12:46.685 "thread": "nvmf_tgt_poll_group_000", 00:12:46.685 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:12:46.685 "listen_address": { 00:12:46.685 "trtype": "TCP", 00:12:46.685 "adrfam": "IPv4", 00:12:46.685 "traddr": "10.0.0.3", 00:12:46.685 "trsvcid": "4420" 00:12:46.685 }, 00:12:46.685 "peer_address": { 00:12:46.685 "trtype": "TCP", 00:12:46.685 "adrfam": "IPv4", 00:12:46.685 "traddr": "10.0.0.1", 00:12:46.685 "trsvcid": "57768" 00:12:46.685 }, 00:12:46.685 "auth": { 00:12:46.685 "state": "completed", 00:12:46.685 "digest": "sha512", 00:12:46.685 "dhgroup": "ffdhe2048" 00:12:46.685 } 00:12:46.685 } 00:12:46.685 ]' 00:12:46.685 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:46.685 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:46.685 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:46.685 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:46.685 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:46.945 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.945 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.945 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.205 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:12:47.205 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:12:47.773 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.773 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:12:47.773 12:59:19 
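One detail visible in every pass: the controller-side key is optional. The expansion ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) only adds --dhchap-ctrlr-key when a controller secret exists for that key index, which is why key3 is added and attached above without any ckey3 argument while key0 through key2 carry one. The same pattern in isolation, with $keyid standing in for the helper's third argument and $hostnqn for the host NQN used throughout this log:

# Expands to '--dhchap-ctrlr-key ckeyN' when ckeys[N] is set, and to nothing otherwise.
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
    -b nvme0 --dhchap-key "key$keyid" "${ckey[@]}"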
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.773 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.773 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.773 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:47.773 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:47.773 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:48.341 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:12:48.341 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:48.341 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:48.341 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:48.341 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:48.341 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.341 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:48.341 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.341 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.341 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.341 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:48.341 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:48.341 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:48.615 00:12:48.615 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:48.615 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:48.615 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.886 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:12:48.886 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.886 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.886 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.886 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.886 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:48.886 { 00:12:48.886 "cntlid": 107, 00:12:48.886 "qid": 0, 00:12:48.886 "state": "enabled", 00:12:48.886 "thread": "nvmf_tgt_poll_group_000", 00:12:48.886 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:12:48.886 "listen_address": { 00:12:48.886 "trtype": "TCP", 00:12:48.886 "adrfam": "IPv4", 00:12:48.886 "traddr": "10.0.0.3", 00:12:48.886 "trsvcid": "4420" 00:12:48.886 }, 00:12:48.886 "peer_address": { 00:12:48.886 "trtype": "TCP", 00:12:48.886 "adrfam": "IPv4", 00:12:48.886 "traddr": "10.0.0.1", 00:12:48.886 "trsvcid": "57798" 00:12:48.886 }, 00:12:48.886 "auth": { 00:12:48.886 "state": "completed", 00:12:48.886 "digest": "sha512", 00:12:48.886 "dhgroup": "ffdhe2048" 00:12:48.886 } 00:12:48.886 } 00:12:48.886 ]' 00:12:48.886 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:48.886 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:48.886 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:48.886 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:48.886 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:48.886 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.886 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:48.886 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.145 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:12:49.145 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:12:50.083 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.083 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:12:50.083 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.083 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.083 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.083 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:50.083 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:50.083 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:50.343 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:12:50.343 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:50.343 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:50.343 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:50.343 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:50.343 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.343 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:50.343 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.343 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.343 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.343 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:50.343 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:50.343 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:50.602 00:12:50.602 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:50.602 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:50.602 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:12:50.860 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:50.860 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:50.860 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.860 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.860 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.860 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:50.860 { 00:12:50.860 "cntlid": 109, 00:12:50.860 "qid": 0, 00:12:50.860 "state": "enabled", 00:12:50.860 "thread": "nvmf_tgt_poll_group_000", 00:12:50.860 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:12:50.860 "listen_address": { 00:12:50.860 "trtype": "TCP", 00:12:50.860 "adrfam": "IPv4", 00:12:50.860 "traddr": "10.0.0.3", 00:12:50.860 "trsvcid": "4420" 00:12:50.860 }, 00:12:50.860 "peer_address": { 00:12:50.860 "trtype": "TCP", 00:12:50.860 "adrfam": "IPv4", 00:12:50.860 "traddr": "10.0.0.1", 00:12:50.860 "trsvcid": "41516" 00:12:50.860 }, 00:12:50.860 "auth": { 00:12:50.860 "state": "completed", 00:12:50.860 "digest": "sha512", 00:12:50.860 "dhgroup": "ffdhe2048" 00:12:50.860 } 00:12:50.860 } 00:12:50.860 ]' 00:12:50.860 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:50.860 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:50.860 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:51.119 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:51.119 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:51.119 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.119 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.119 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.378 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:12:51.378 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:12:51.945 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:51.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:51.946 12:59:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:12:51.946 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.946 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.204 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.204 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:52.204 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:52.204 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:52.463 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:12:52.463 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:52.463 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:52.463 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:52.463 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:52.463 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.463 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key3 00:12:52.463 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.463 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.463 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.463 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:52.463 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:52.463 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:52.722 00:12:52.722 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:52.722 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:52.722 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.981 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:52.981 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:52.981 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.981 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.981 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.981 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:52.981 { 00:12:52.981 "cntlid": 111, 00:12:52.981 "qid": 0, 00:12:52.981 "state": "enabled", 00:12:52.981 "thread": "nvmf_tgt_poll_group_000", 00:12:52.981 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:12:52.981 "listen_address": { 00:12:52.981 "trtype": "TCP", 00:12:52.981 "adrfam": "IPv4", 00:12:52.981 "traddr": "10.0.0.3", 00:12:52.981 "trsvcid": "4420" 00:12:52.981 }, 00:12:52.981 "peer_address": { 00:12:52.981 "trtype": "TCP", 00:12:52.981 "adrfam": "IPv4", 00:12:52.981 "traddr": "10.0.0.1", 00:12:52.981 "trsvcid": "41532" 00:12:52.981 }, 00:12:52.981 "auth": { 00:12:52.981 "state": "completed", 00:12:52.981 "digest": "sha512", 00:12:52.981 "dhgroup": "ffdhe2048" 00:12:52.981 } 00:12:52.981 } 00:12:52.981 ]' 00:12:52.981 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:53.239 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:53.239 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:53.239 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:53.240 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:53.240 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.240 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.240 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.498 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:12:53.498 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:12:54.436 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.436 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:12:54.436 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.436 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.436 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.436 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:54.436 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:54.436 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:54.436 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:54.436 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:12:54.436 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:54.436 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:54.436 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:54.436 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:54.436 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.436 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:54.436 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.436 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.436 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.436 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:54.436 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:54.436 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:55.004 00:12:55.004 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:55.004 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.004 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:55.263 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.263 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.263 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.263 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.263 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.263 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:55.263 { 00:12:55.263 "cntlid": 113, 00:12:55.263 "qid": 0, 00:12:55.263 "state": "enabled", 00:12:55.263 "thread": "nvmf_tgt_poll_group_000", 00:12:55.263 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:12:55.263 "listen_address": { 00:12:55.263 "trtype": "TCP", 00:12:55.263 "adrfam": "IPv4", 00:12:55.263 "traddr": "10.0.0.3", 00:12:55.263 "trsvcid": "4420" 00:12:55.263 }, 00:12:55.263 "peer_address": { 00:12:55.263 "trtype": "TCP", 00:12:55.263 "adrfam": "IPv4", 00:12:55.263 "traddr": "10.0.0.1", 00:12:55.263 "trsvcid": "41554" 00:12:55.263 }, 00:12:55.263 "auth": { 00:12:55.263 "state": "completed", 00:12:55.263 "digest": "sha512", 00:12:55.263 "dhgroup": "ffdhe3072" 00:12:55.263 } 00:12:55.263 } 00:12:55.263 ]' 00:12:55.263 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:55.263 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:55.263 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:55.263 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:55.263 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:55.263 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.263 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.263 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.831 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:12:55.831 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret 
DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:12:56.397 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.397 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:12:56.397 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.397 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.397 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.397 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:56.397 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:56.397 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:56.656 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:12:56.656 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:56.656 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:56.656 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:56.656 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:56.656 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.656 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:56.656 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.656 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.656 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.656 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:56.656 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:56.656 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:57.225 00:12:57.225 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:57.225 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:57.225 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.483 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.483 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.483 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.483 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.483 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.483 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:57.483 { 00:12:57.483 "cntlid": 115, 00:12:57.483 "qid": 0, 00:12:57.483 "state": "enabled", 00:12:57.483 "thread": "nvmf_tgt_poll_group_000", 00:12:57.483 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:12:57.483 "listen_address": { 00:12:57.483 "trtype": "TCP", 00:12:57.483 "adrfam": "IPv4", 00:12:57.483 "traddr": "10.0.0.3", 00:12:57.483 "trsvcid": "4420" 00:12:57.483 }, 00:12:57.483 "peer_address": { 00:12:57.483 "trtype": "TCP", 00:12:57.483 "adrfam": "IPv4", 00:12:57.483 "traddr": "10.0.0.1", 00:12:57.483 "trsvcid": "41578" 00:12:57.483 }, 00:12:57.483 "auth": { 00:12:57.483 "state": "completed", 00:12:57.483 "digest": "sha512", 00:12:57.483 "dhgroup": "ffdhe3072" 00:12:57.483 } 00:12:57.483 } 00:12:57.483 ]' 00:12:57.483 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:57.483 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:57.483 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:57.483 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:57.483 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:57.742 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.742 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.742 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:58.018 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:12:58.018 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid 
e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:12:58.607 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.607 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:12:58.607 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.607 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.607 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.607 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:58.607 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:58.607 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:58.865 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:12:58.865 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:58.865 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:58.865 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:58.865 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:58.865 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.865 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:58.865 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.865 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.865 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.865 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:58.865 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:58.865 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:59.431 00:12:59.432 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:59.432 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:59.432 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.691 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:59.691 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:59.691 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.691 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.691 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.691 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:59.691 { 00:12:59.691 "cntlid": 117, 00:12:59.691 "qid": 0, 00:12:59.691 "state": "enabled", 00:12:59.691 "thread": "nvmf_tgt_poll_group_000", 00:12:59.691 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:12:59.691 "listen_address": { 00:12:59.691 "trtype": "TCP", 00:12:59.691 "adrfam": "IPv4", 00:12:59.691 "traddr": "10.0.0.3", 00:12:59.691 "trsvcid": "4420" 00:12:59.691 }, 00:12:59.691 "peer_address": { 00:12:59.691 "trtype": "TCP", 00:12:59.691 "adrfam": "IPv4", 00:12:59.691 "traddr": "10.0.0.1", 00:12:59.691 "trsvcid": "41592" 00:12:59.691 }, 00:12:59.691 "auth": { 00:12:59.691 "state": "completed", 00:12:59.691 "digest": "sha512", 00:12:59.691 "dhgroup": "ffdhe3072" 00:12:59.691 } 00:12:59.691 } 00:12:59.691 ]' 00:12:59.691 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:59.691 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:59.691 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:59.691 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:59.691 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:59.692 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.692 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.692 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.950 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:12:59.950 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:13:00.887 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.887 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:13:00.887 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.887 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.887 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.887 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:00.887 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:00.887 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:00.887 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:13:00.887 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:00.887 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:00.887 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:00.887 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:00.887 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.887 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key3 00:13:00.887 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.887 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.146 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.146 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:01.146 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:01.146 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:01.405 00:13:01.405 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:01.405 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:01.405 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.664 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.664 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:01.664 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.664 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.664 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.664 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:01.664 { 00:13:01.664 "cntlid": 119, 00:13:01.664 "qid": 0, 00:13:01.664 "state": "enabled", 00:13:01.664 "thread": "nvmf_tgt_poll_group_000", 00:13:01.664 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:13:01.664 "listen_address": { 00:13:01.664 "trtype": "TCP", 00:13:01.664 "adrfam": "IPv4", 00:13:01.664 "traddr": "10.0.0.3", 00:13:01.664 "trsvcid": "4420" 00:13:01.664 }, 00:13:01.664 "peer_address": { 00:13:01.664 "trtype": "TCP", 00:13:01.664 "adrfam": "IPv4", 00:13:01.664 "traddr": "10.0.0.1", 00:13:01.664 "trsvcid": "59834" 00:13:01.665 }, 00:13:01.665 "auth": { 00:13:01.665 "state": "completed", 00:13:01.665 "digest": "sha512", 00:13:01.665 "dhgroup": "ffdhe3072" 00:13:01.665 } 00:13:01.665 } 00:13:01.665 ]' 00:13:01.665 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:01.665 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:01.665 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:01.923 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:01.923 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:01.923 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.923 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.923 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:02.182 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:13:02.183 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:13:02.750 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.750 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:13:02.750 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.750 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.750 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.750 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:02.751 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:02.751 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:02.751 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:03.010 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:13:03.010 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:03.010 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:03.010 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:03.010 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:03.010 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.010 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.010 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.010 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.010 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.010 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.010 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.010 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.579 00:13:03.579 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:03.579 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:03.579 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:03.579 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:03.579 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:03.579 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.579 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.579 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.579 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:03.579 { 00:13:03.579 "cntlid": 121, 00:13:03.579 "qid": 0, 00:13:03.579 "state": "enabled", 00:13:03.579 "thread": "nvmf_tgt_poll_group_000", 00:13:03.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:13:03.579 "listen_address": { 00:13:03.579 "trtype": "TCP", 00:13:03.579 "adrfam": "IPv4", 00:13:03.579 "traddr": "10.0.0.3", 00:13:03.579 "trsvcid": "4420" 00:13:03.579 }, 00:13:03.579 "peer_address": { 00:13:03.579 "trtype": "TCP", 00:13:03.579 "adrfam": "IPv4", 00:13:03.579 "traddr": "10.0.0.1", 00:13:03.579 "trsvcid": "59864" 00:13:03.579 }, 00:13:03.579 "auth": { 00:13:03.579 "state": "completed", 00:13:03.579 "digest": "sha512", 00:13:03.579 "dhgroup": "ffdhe4096" 00:13:03.579 } 00:13:03.579 } 00:13:03.579 ]' 00:13:03.579 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:03.836 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:03.836 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:03.836 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:03.836 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:03.837 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.837 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.837 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.094 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret 
DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:13:04.095 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:13:04.664 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:04.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:04.664 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:13:04.664 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.664 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.664 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.664 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:04.664 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:04.664 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:05.231 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:13:05.231 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:05.231 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:05.231 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:05.231 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:05.231 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:05.231 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:05.231 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.231 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.231 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.231 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:05.231 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:05.231 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:05.490 00:13:05.490 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:05.490 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:05.490 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.749 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:05.749 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:05.749 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.749 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.749 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.749 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:05.749 { 00:13:05.749 "cntlid": 123, 00:13:05.749 "qid": 0, 00:13:05.749 "state": "enabled", 00:13:05.749 "thread": "nvmf_tgt_poll_group_000", 00:13:05.749 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:13:05.749 "listen_address": { 00:13:05.749 "trtype": "TCP", 00:13:05.749 "adrfam": "IPv4", 00:13:05.749 "traddr": "10.0.0.3", 00:13:05.749 "trsvcid": "4420" 00:13:05.749 }, 00:13:05.749 "peer_address": { 00:13:05.749 "trtype": "TCP", 00:13:05.749 "adrfam": "IPv4", 00:13:05.749 "traddr": "10.0.0.1", 00:13:05.749 "trsvcid": "59904" 00:13:05.749 }, 00:13:05.749 "auth": { 00:13:05.749 "state": "completed", 00:13:05.749 "digest": "sha512", 00:13:05.749 "dhgroup": "ffdhe4096" 00:13:05.749 } 00:13:05.749 } 00:13:05.749 ]' 00:13:05.749 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:05.749 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:05.749 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:06.008 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:06.008 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:06.008 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.008 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.008 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:06.267 12:59:37 
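The trace above runs one DH-HMAC-CHAP verification round per key and DH group. A minimal standalone sketch of that round, using only RPC calls that appear verbatim in this log (SPDK checkout at /home/vagrant/spdk_repo/spdk, host RPC socket /var/tmp/host.sock, target listener 10.0.0.3:4420); the key1/ckey1 names stand in for whichever key pair a given pass exercises and are assumed to have been registered earlier in the run:

# host side: restrict negotiation to the digest/dhgroup under test
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
# target side (issued via rpc_cmd in the trace): allow the host NQN with the key pair under test
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key1 --dhchap-ctrlr-key ckey1
# host side: attaching a bdev controller forces the DH-HMAC-CHAP exchange
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

The kernel-initiator variant of the same check is the nvme connect invocation with --dhchap-secret/--dhchap-ctrl-secret that follows in the trace, undone by nvme disconnect and nvmf_subsystem_remove_host before the next pass.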
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:13:06.267 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:13:06.834 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:06.834 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:06.834 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:13:06.834 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.834 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.834 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.834 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:06.834 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:06.834 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:07.402 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:13:07.402 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:07.402 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:07.402 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:07.402 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:07.402 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:07.402 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:07.402 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.402 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.402 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.402 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:07.402 12:59:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:07.402 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:07.660 00:13:07.660 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:07.660 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:07.660 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:07.918 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:07.918 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:07.918 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.918 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.918 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.918 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:07.918 { 00:13:07.918 "cntlid": 125, 00:13:07.918 "qid": 0, 00:13:07.918 "state": "enabled", 00:13:07.918 "thread": "nvmf_tgt_poll_group_000", 00:13:07.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:13:07.918 "listen_address": { 00:13:07.918 "trtype": "TCP", 00:13:07.918 "adrfam": "IPv4", 00:13:07.918 "traddr": "10.0.0.3", 00:13:07.918 "trsvcid": "4420" 00:13:07.918 }, 00:13:07.918 "peer_address": { 00:13:07.918 "trtype": "TCP", 00:13:07.918 "adrfam": "IPv4", 00:13:07.918 "traddr": "10.0.0.1", 00:13:07.918 "trsvcid": "59932" 00:13:07.918 }, 00:13:07.918 "auth": { 00:13:07.918 "state": "completed", 00:13:07.918 "digest": "sha512", 00:13:07.918 "dhgroup": "ffdhe4096" 00:13:07.918 } 00:13:07.918 } 00:13:07.918 ]' 00:13:07.918 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:07.918 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:07.918 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:07.918 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:07.918 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:08.176 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.176 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.176 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:08.433 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:13:08.433 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:13:08.999 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:09.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:09.257 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:13:09.257 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.257 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.257 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.257 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:09.257 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:09.257 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:09.514 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:13:09.514 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:09.514 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:09.515 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:09.515 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:09.515 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:09.515 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key3 00:13:09.515 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.515 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.515 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.515 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:13:09.515 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:09.515 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:09.773 00:13:09.773 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:09.773 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.773 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:10.338 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:10.338 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:10.338 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.338 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.338 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.338 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:10.338 { 00:13:10.338 "cntlid": 127, 00:13:10.338 "qid": 0, 00:13:10.338 "state": "enabled", 00:13:10.338 "thread": "nvmf_tgt_poll_group_000", 00:13:10.338 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:13:10.338 "listen_address": { 00:13:10.339 "trtype": "TCP", 00:13:10.339 "adrfam": "IPv4", 00:13:10.339 "traddr": "10.0.0.3", 00:13:10.339 "trsvcid": "4420" 00:13:10.339 }, 00:13:10.339 "peer_address": { 00:13:10.339 "trtype": "TCP", 00:13:10.339 "adrfam": "IPv4", 00:13:10.339 "traddr": "10.0.0.1", 00:13:10.339 "trsvcid": "59970" 00:13:10.339 }, 00:13:10.339 "auth": { 00:13:10.339 "state": "completed", 00:13:10.339 "digest": "sha512", 00:13:10.339 "dhgroup": "ffdhe4096" 00:13:10.339 } 00:13:10.339 } 00:13:10.339 ]' 00:13:10.339 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:10.339 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:10.339 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:10.339 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:10.339 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:10.339 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:10.339 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:10.339 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:10.596 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:13:10.596 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:13:11.162 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:11.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:11.162 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:13:11.162 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.162 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.162 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.162 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:11.162 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:11.162 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:11.162 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:11.728 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:13:11.728 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:11.728 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:11.728 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:11.728 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:11.728 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:11.728 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:11.728 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.728 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.728 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.728 12:59:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:11.728 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:11.728 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:11.986 00:13:11.986 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:11.986 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.986 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:12.244 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.244 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.244 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.244 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.244 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.244 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:12.244 { 00:13:12.244 "cntlid": 129, 00:13:12.244 "qid": 0, 00:13:12.244 "state": "enabled", 00:13:12.244 "thread": "nvmf_tgt_poll_group_000", 00:13:12.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:13:12.244 "listen_address": { 00:13:12.245 "trtype": "TCP", 00:13:12.245 "adrfam": "IPv4", 00:13:12.245 "traddr": "10.0.0.3", 00:13:12.245 "trsvcid": "4420" 00:13:12.245 }, 00:13:12.245 "peer_address": { 00:13:12.245 "trtype": "TCP", 00:13:12.245 "adrfam": "IPv4", 00:13:12.245 "traddr": "10.0.0.1", 00:13:12.245 "trsvcid": "49110" 00:13:12.245 }, 00:13:12.245 "auth": { 00:13:12.245 "state": "completed", 00:13:12.245 "digest": "sha512", 00:13:12.245 "dhgroup": "ffdhe6144" 00:13:12.245 } 00:13:12.245 } 00:13:12.245 ]' 00:13:12.245 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:12.245 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:12.245 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:12.502 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:12.502 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:12.502 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:12.502 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:12.502 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:12.760 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:13:12.760 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:13:13.357 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:13.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:13.357 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:13:13.357 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.357 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.357 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.357 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:13.357 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:13.357 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:13.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:13:13.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:13.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:13.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:13.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:13.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:13.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:13.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.617 12:59:45 
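Each attach in this log is followed by the same post-connect assertions. A condensed sketch of that check and the teardown, assuming rpc.py without -s talks to the target's default RPC socket while the -s /var/tmp/host.sock call talks to the host application (the qpairs variable name is taken from the trace; the expected dhgroup varies per pass):

# verify the established qpair negotiated the expected auth parameters
qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
[[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]
# tear down before the next key/dhgroup combination
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0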
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:13.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:13.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.184 00:13:14.184 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:14.184 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:14.184 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:14.184 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:14.184 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:14.184 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.184 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.443 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.443 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:14.443 { 00:13:14.443 "cntlid": 131, 00:13:14.443 "qid": 0, 00:13:14.443 "state": "enabled", 00:13:14.443 "thread": "nvmf_tgt_poll_group_000", 00:13:14.443 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:13:14.443 "listen_address": { 00:13:14.443 "trtype": "TCP", 00:13:14.443 "adrfam": "IPv4", 00:13:14.443 "traddr": "10.0.0.3", 00:13:14.443 "trsvcid": "4420" 00:13:14.443 }, 00:13:14.443 "peer_address": { 00:13:14.443 "trtype": "TCP", 00:13:14.443 "adrfam": "IPv4", 00:13:14.443 "traddr": "10.0.0.1", 00:13:14.443 "trsvcid": "49142" 00:13:14.443 }, 00:13:14.443 "auth": { 00:13:14.443 "state": "completed", 00:13:14.443 "digest": "sha512", 00:13:14.443 "dhgroup": "ffdhe6144" 00:13:14.443 } 00:13:14.443 } 00:13:14.443 ]' 00:13:14.443 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:14.443 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:14.443 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:14.443 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:14.443 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:13:14.443 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:14.443 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:14.443 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:14.702 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:13:14.702 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:13:15.270 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:15.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:15.270 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:13:15.270 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.270 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.270 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.270 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:15.270 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:15.270 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:15.529 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:13:15.529 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:15.529 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:15.529 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:15.529 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:15.529 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:15.529 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.529 12:59:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.529 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.529 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.529 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.529 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.529 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.097 00:13:16.097 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:16.097 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:16.097 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:16.357 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:16.357 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:16.357 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.357 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.357 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.357 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:16.357 { 00:13:16.357 "cntlid": 133, 00:13:16.357 "qid": 0, 00:13:16.357 "state": "enabled", 00:13:16.357 "thread": "nvmf_tgt_poll_group_000", 00:13:16.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:13:16.357 "listen_address": { 00:13:16.357 "trtype": "TCP", 00:13:16.357 "adrfam": "IPv4", 00:13:16.357 "traddr": "10.0.0.3", 00:13:16.357 "trsvcid": "4420" 00:13:16.357 }, 00:13:16.357 "peer_address": { 00:13:16.357 "trtype": "TCP", 00:13:16.357 "adrfam": "IPv4", 00:13:16.357 "traddr": "10.0.0.1", 00:13:16.357 "trsvcid": "49170" 00:13:16.357 }, 00:13:16.357 "auth": { 00:13:16.357 "state": "completed", 00:13:16.357 "digest": "sha512", 00:13:16.357 "dhgroup": "ffdhe6144" 00:13:16.357 } 00:13:16.357 } 00:13:16.357 ]' 00:13:16.357 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:16.357 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:16.357 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:16.357 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:13:16.357 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:16.357 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:16.357 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:16.357 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:16.926 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:13:16.926 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:13:17.495 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:17.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:17.495 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:13:17.495 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.495 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.495 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.495 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:17.495 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:17.495 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:17.755 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:13:17.755 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:17.755 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:17.755 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:17.755 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:17.755 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:17.755 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key3 00:13:17.755 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.755 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.755 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.755 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:17.755 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:17.755 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:18.324 00:13:18.324 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:18.324 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:18.324 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:18.582 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:18.583 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:18.583 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.583 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.583 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.583 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:18.583 { 00:13:18.583 "cntlid": 135, 00:13:18.583 "qid": 0, 00:13:18.583 "state": "enabled", 00:13:18.583 "thread": "nvmf_tgt_poll_group_000", 00:13:18.583 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:13:18.583 "listen_address": { 00:13:18.583 "trtype": "TCP", 00:13:18.583 "adrfam": "IPv4", 00:13:18.583 "traddr": "10.0.0.3", 00:13:18.583 "trsvcid": "4420" 00:13:18.583 }, 00:13:18.583 "peer_address": { 00:13:18.583 "trtype": "TCP", 00:13:18.583 "adrfam": "IPv4", 00:13:18.583 "traddr": "10.0.0.1", 00:13:18.583 "trsvcid": "49216" 00:13:18.583 }, 00:13:18.583 "auth": { 00:13:18.583 "state": "completed", 00:13:18.583 "digest": "sha512", 00:13:18.583 "dhgroup": "ffdhe6144" 00:13:18.583 } 00:13:18.583 } 00:13:18.583 ]' 00:13:18.583 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:18.583 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:18.583 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:18.842 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:18.842 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:18.842 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:18.842 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:18.842 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:19.101 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:13:19.101 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:13:19.670 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:19.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:19.670 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:13:19.670 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.670 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.670 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.670 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:19.670 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:19.670 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:19.670 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:19.929 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:13:19.929 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:19.929 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:19.929 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:19.929 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:19.929 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:19.929 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:19.929 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.929 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.929 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.929 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:19.929 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:19.929 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.863 00:13:20.863 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:20.863 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:20.863 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:21.120 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.120 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.120 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.120 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.120 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.120 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:21.120 { 00:13:21.120 "cntlid": 137, 00:13:21.120 "qid": 0, 00:13:21.120 "state": "enabled", 00:13:21.120 "thread": "nvmf_tgt_poll_group_000", 00:13:21.120 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:13:21.120 "listen_address": { 00:13:21.120 "trtype": "TCP", 00:13:21.120 "adrfam": "IPv4", 00:13:21.120 "traddr": "10.0.0.3", 00:13:21.120 "trsvcid": "4420" 00:13:21.120 }, 00:13:21.120 "peer_address": { 00:13:21.120 "trtype": "TCP", 00:13:21.120 "adrfam": "IPv4", 00:13:21.120 "traddr": "10.0.0.1", 00:13:21.120 "trsvcid": "56538" 00:13:21.120 }, 00:13:21.120 "auth": { 00:13:21.120 "state": "completed", 00:13:21.120 "digest": "sha512", 00:13:21.120 "dhgroup": "ffdhe8192" 00:13:21.120 } 00:13:21.120 } 00:13:21.120 ]' 00:13:21.120 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:21.120 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:21.120 12:59:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:21.120 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:21.120 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:21.120 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.120 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.120 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:21.686 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:13:21.686 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:13:22.253 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.253 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:13:22.253 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.253 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.253 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.253 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:22.253 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:22.253 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:22.510 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:13:22.510 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:22.510 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:22.510 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:22.510 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:22.510 12:59:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.510 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:22.510 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.510 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.510 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.510 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:22.510 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:22.510 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.443 00:13:23.443 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:23.443 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.443 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:23.443 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.443 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.443 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.443 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.703 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.703 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:23.703 { 00:13:23.703 "cntlid": 139, 00:13:23.703 "qid": 0, 00:13:23.703 "state": "enabled", 00:13:23.703 "thread": "nvmf_tgt_poll_group_000", 00:13:23.703 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:13:23.703 "listen_address": { 00:13:23.703 "trtype": "TCP", 00:13:23.703 "adrfam": "IPv4", 00:13:23.703 "traddr": "10.0.0.3", 00:13:23.703 "trsvcid": "4420" 00:13:23.703 }, 00:13:23.703 "peer_address": { 00:13:23.703 "trtype": "TCP", 00:13:23.703 "adrfam": "IPv4", 00:13:23.703 "traddr": "10.0.0.1", 00:13:23.703 "trsvcid": "56572" 00:13:23.703 }, 00:13:23.703 "auth": { 00:13:23.703 "state": "completed", 00:13:23.703 "digest": "sha512", 00:13:23.703 "dhgroup": "ffdhe8192" 00:13:23.703 } 00:13:23.703 } 00:13:23.703 ]' 00:13:23.703 12:59:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:23.703 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:23.703 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:23.703 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:23.703 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:23.703 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.703 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.703 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.961 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:13:23.961 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: --dhchap-ctrl-secret DHHC-1:02:YjI2YTc4ZGUyOTVlZTY2NTRhNmFmNDNjYWI0NTc0YzdiODcyOWY4ODYwN2EyOTBjx/AH0w==: 00:13:24.897 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.897 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:13:24.897 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.897 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.897 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.897 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:24.897 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:24.897 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:24.897 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:13:24.897 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:24.897 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:24.897 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:13:24.897 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:24.897 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.897 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.897 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.897 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.155 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.155 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.155 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.155 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.723 00:13:25.723 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:25.723 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.723 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:25.983 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.983 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:25.983 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.983 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.983 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.983 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:25.983 { 00:13:25.983 "cntlid": 141, 00:13:25.983 "qid": 0, 00:13:25.983 "state": "enabled", 00:13:25.983 "thread": "nvmf_tgt_poll_group_000", 00:13:25.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:13:25.983 "listen_address": { 00:13:25.983 "trtype": "TCP", 00:13:25.983 "adrfam": "IPv4", 00:13:25.983 "traddr": "10.0.0.3", 00:13:25.983 "trsvcid": "4420" 00:13:25.983 }, 00:13:25.983 "peer_address": { 00:13:25.983 "trtype": "TCP", 00:13:25.983 "adrfam": "IPv4", 00:13:25.983 "traddr": "10.0.0.1", 00:13:25.983 "trsvcid": "56608" 00:13:25.983 }, 00:13:25.983 "auth": { 00:13:25.983 "state": "completed", 00:13:25.983 "digest": 
"sha512", 00:13:25.983 "dhgroup": "ffdhe8192" 00:13:25.983 } 00:13:25.983 } 00:13:25.983 ]' 00:13:25.983 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:25.983 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:25.983 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:25.983 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:25.983 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:25.983 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:25.983 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:25.983 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.551 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:13:26.551 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:01:ZmY3MDZlOTVhYzJkODQ3MDIyM2RmMGMzZjA5YTdlMWUAC9wz: 00:13:27.120 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.120 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:13:27.120 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.120 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.120 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.120 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:27.120 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:27.120 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:27.380 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:13:27.380 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:27.380 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:13:27.380 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:27.380 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:27.380 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:27.380 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key3 00:13:27.380 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.380 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.380 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.380 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:27.380 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:27.380 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:27.949 00:13:27.949 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:27.949 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.949 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:28.208 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.208 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:28.208 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.208 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.208 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.208 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:28.208 { 00:13:28.208 "cntlid": 143, 00:13:28.208 "qid": 0, 00:13:28.208 "state": "enabled", 00:13:28.208 "thread": "nvmf_tgt_poll_group_000", 00:13:28.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:13:28.208 "listen_address": { 00:13:28.208 "trtype": "TCP", 00:13:28.208 "adrfam": "IPv4", 00:13:28.208 "traddr": "10.0.0.3", 00:13:28.208 "trsvcid": "4420" 00:13:28.208 }, 00:13:28.208 "peer_address": { 00:13:28.208 "trtype": "TCP", 00:13:28.208 "adrfam": "IPv4", 00:13:28.208 "traddr": "10.0.0.1", 00:13:28.208 "trsvcid": "56620" 00:13:28.208 }, 00:13:28.208 "auth": { 00:13:28.208 "state": "completed", 00:13:28.208 
"digest": "sha512", 00:13:28.208 "dhgroup": "ffdhe8192" 00:13:28.208 } 00:13:28.208 } 00:13:28.208 ]' 00:13:28.208 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:28.534 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:28.534 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:28.534 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:28.534 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:28.534 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.534 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.534 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.814 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:13:28.814 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:13:29.382 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:29.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:29.382 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:13:29.382 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.382 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.382 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.382 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:29.382 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:13:29.382 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:29.382 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:29.382 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:29.382 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:29.950 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:13:29.950 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:29.950 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:29.950 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:29.950 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:29.950 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:29.950 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.950 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.950 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.950 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.950 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.950 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.950 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:30.516 00:13:30.516 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:30.516 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:30.516 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:30.776 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:30.776 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:30.776 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.776 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.776 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.776 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:30.776 { 00:13:30.776 "cntlid": 145, 00:13:30.776 "qid": 0, 00:13:30.776 "state": "enabled", 00:13:30.776 "thread": "nvmf_tgt_poll_group_000", 00:13:30.776 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:13:30.776 "listen_address": { 00:13:30.776 "trtype": "TCP", 00:13:30.776 "adrfam": "IPv4", 00:13:30.776 "traddr": "10.0.0.3", 00:13:30.776 "trsvcid": "4420" 00:13:30.776 }, 00:13:30.776 "peer_address": { 00:13:30.776 "trtype": "TCP", 00:13:30.776 "adrfam": "IPv4", 00:13:30.776 "traddr": "10.0.0.1", 00:13:30.776 "trsvcid": "53054" 00:13:30.776 }, 00:13:30.776 "auth": { 00:13:30.776 "state": "completed", 00:13:30.776 "digest": "sha512", 00:13:30.776 "dhgroup": "ffdhe8192" 00:13:30.776 } 00:13:30.776 } 00:13:30.776 ]' 00:13:30.776 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:30.776 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:30.776 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:30.776 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:30.776 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:30.776 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.776 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:30.776 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:31.345 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:13:31.345 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:00:NWI5ZDFjMjg3MWZkOGVlODg0ZTgwZTdlODUyMTMyZmFkNWEzZjMwNGQ3YzkxYjFmUUfP2w==: --dhchap-ctrl-secret DHHC-1:03:MWM2MzIzYmFjZTUxNDNhMGYzODJiNjY1YTVmMmE5MzczYjRkMDBmOGVkYzgwY2JjOWZiMGM1YjM4N2VmMzkwYqD4XxE=: 00:13:31.913 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:31.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:31.913 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:13:31.913 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.913 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.913 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.913 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key1 00:13:31.913 13:00:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.913 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.913 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.913 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:13:31.913 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:31.913 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:13:31.913 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:31.913 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:31.913 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:31.913 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:31.913 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:13:31.913 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:31.913 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:32.490 request: 00:13:32.490 { 00:13:32.490 "name": "nvme0", 00:13:32.490 "trtype": "tcp", 00:13:32.490 "traddr": "10.0.0.3", 00:13:32.490 "adrfam": "ipv4", 00:13:32.490 "trsvcid": "4420", 00:13:32.490 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:32.490 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:13:32.490 "prchk_reftag": false, 00:13:32.490 "prchk_guard": false, 00:13:32.490 "hdgst": false, 00:13:32.490 "ddgst": false, 00:13:32.490 "dhchap_key": "key2", 00:13:32.490 "allow_unrecognized_csi": false, 00:13:32.490 "method": "bdev_nvme_attach_controller", 00:13:32.490 "req_id": 1 00:13:32.490 } 00:13:32.490 Got JSON-RPC error response 00:13:32.490 response: 00:13:32.490 { 00:13:32.490 "code": -5, 00:13:32.490 "message": "Input/output error" 00:13:32.490 } 00:13:32.490 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:32.490 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:32.490 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:32.490 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:32.491 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:13:32.491 
13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.491 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.491 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.491 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.491 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.491 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.491 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.491 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:32.491 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:32.491 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:32.491 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:32.491 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:32.491 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:32.491 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:32.491 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:32.491 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:32.491 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:33.071 request: 00:13:33.071 { 00:13:33.071 "name": "nvme0", 00:13:33.071 "trtype": "tcp", 00:13:33.071 "traddr": "10.0.0.3", 00:13:33.071 "adrfam": "ipv4", 00:13:33.071 "trsvcid": "4420", 00:13:33.071 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:33.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:13:33.071 "prchk_reftag": false, 00:13:33.071 "prchk_guard": false, 00:13:33.071 "hdgst": false, 00:13:33.072 "ddgst": false, 00:13:33.072 "dhchap_key": "key1", 00:13:33.072 "dhchap_ctrlr_key": "ckey2", 00:13:33.072 "allow_unrecognized_csi": false, 00:13:33.072 "method": "bdev_nvme_attach_controller", 00:13:33.072 "req_id": 1 00:13:33.072 } 00:13:33.072 Got JSON-RPC error response 00:13:33.072 response: 00:13:33.072 { 
00:13:33.072 "code": -5, 00:13:33.072 "message": "Input/output error" 00:13:33.072 } 00:13:33.072 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:33.072 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:33.072 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:33.072 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:33.072 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:13:33.072 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.072 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.072 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.072 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key1 00:13:33.072 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.072 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.072 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.072 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.072 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:33.072 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.072 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:33.072 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:33.072 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:33.072 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:33.072 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.072 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.072 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.638 
request: 00:13:33.638 { 00:13:33.638 "name": "nvme0", 00:13:33.638 "trtype": "tcp", 00:13:33.638 "traddr": "10.0.0.3", 00:13:33.638 "adrfam": "ipv4", 00:13:33.638 "trsvcid": "4420", 00:13:33.638 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:33.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:13:33.638 "prchk_reftag": false, 00:13:33.638 "prchk_guard": false, 00:13:33.638 "hdgst": false, 00:13:33.638 "ddgst": false, 00:13:33.638 "dhchap_key": "key1", 00:13:33.638 "dhchap_ctrlr_key": "ckey1", 00:13:33.638 "allow_unrecognized_csi": false, 00:13:33.638 "method": "bdev_nvme_attach_controller", 00:13:33.638 "req_id": 1 00:13:33.638 } 00:13:33.638 Got JSON-RPC error response 00:13:33.638 response: 00:13:33.638 { 00:13:33.638 "code": -5, 00:13:33.638 "message": "Input/output error" 00:13:33.638 } 00:13:33.638 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:33.638 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:33.638 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:33.638 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:33.638 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:13:33.638 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.638 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.638 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.638 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67308 00:13:33.638 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67308 ']' 00:13:33.638 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67308 00:13:33.638 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:13:33.638 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:33.638 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67308 00:13:33.638 killing process with pid 67308 00:13:33.638 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:33.638 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:33.638 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67308' 00:13:33.638 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67308 00:13:33.638 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67308 00:13:33.638 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:33.638 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:33.638 13:00:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:33.638 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.638 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70434 00:13:33.639 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:33.639 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70434 00:13:33.639 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70434 ']' 00:13:33.639 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.639 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:33.639 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.639 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:33.639 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.206 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:34.206 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:34.206 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:34.206 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:34.206 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.206 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:34.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.206 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:34.206 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70434 00:13:34.206 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70434 ']' 00:13:34.206 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.206 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:34.206 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
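For the keyring-based half of the test a fresh nvmf_tgt (pid 70434 here) is started inside the nvmf_tgt_ns_spdk network namespace with --wait-for-rpc and the nvmf_auth log flag, so the target-side DH-HMAC-CHAP handshake is traced, and the script then waits until the RPC socket answers before loading keys. A condensed sketch of that startup, reusing only the flags shown in the trace and polling the default RPC socket in place of the waitforlisten helper:

    # Start the target with auth logging enabled and subsystem init deferred until RPC is ready.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!

    # Block until the new process responds on the RPC socket (roughly what waitforlisten does).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 rpc_get_methods > /dev/null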
00:13:34.206 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:34.206 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.465 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:34.465 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:34.465 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:13:34.465 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.465 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.465 null0 00:13:34.465 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.465 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:34.465 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Sxu 00:13:34.465 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.465 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.465 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.465 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.Le9 ]] 00:13:34.465 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Le9 00:13:34.465 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.465 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.465 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.465 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:34.465 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.cgB 00:13:34.465 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.465 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.725 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.725 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Pe4 ]] 00:13:34.725 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Pe4 00:13:34.725 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.725 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.725 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.725 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:34.725 13:00:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Olc 00:13:34.725 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.725 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.725 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.725 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.ZSi ]] 00:13:34.725 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZSi 00:13:34.725 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.725 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.725 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.725 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:34.725 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.gah 00:13:34.725 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.725 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.725 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.725 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:13:34.725 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:13:34.725 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:34.725 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:34.725 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:34.725 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:34.725 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:34.725 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key3 00:13:34.725 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.725 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.725 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.725 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:34.725 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
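This restart provisions the secrets through the keyring instead of passing them inline: each /tmp/spdk.key-* file is registered with keyring_file_add_key as key0..key3 (and ckey0..ckey2 for the controller secrets), the host is authorized on cnode0 with --dhchap-key key3, and the host-side attach then references the same key name, presumably resolved from the host application's own keyring populated earlier in the run. A compact sketch of the key3 path, abbreviating the full scripts/rpc.py path used in the trace to rpc.py:

    # Target side: load the DH-HMAC-CHAP secret into the keyring and allow the host to use it.
    rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.gah
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key3

    # Host side: the attach names the keyring entry instead of carrying the raw secret.
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3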
00:13:34.725 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:35.663 nvme0n1 00:13:35.663 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:35.663 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.663 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:35.663 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.663 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.663 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.663 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.923 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.923 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:35.923 { 00:13:35.923 "cntlid": 1, 00:13:35.923 "qid": 0, 00:13:35.923 "state": "enabled", 00:13:35.923 "thread": "nvmf_tgt_poll_group_000", 00:13:35.923 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:13:35.923 "listen_address": { 00:13:35.923 "trtype": "TCP", 00:13:35.923 "adrfam": "IPv4", 00:13:35.923 "traddr": "10.0.0.3", 00:13:35.923 "trsvcid": "4420" 00:13:35.923 }, 00:13:35.923 "peer_address": { 00:13:35.923 "trtype": "TCP", 00:13:35.923 "adrfam": "IPv4", 00:13:35.923 "traddr": "10.0.0.1", 00:13:35.923 "trsvcid": "53126" 00:13:35.923 }, 00:13:35.923 "auth": { 00:13:35.923 "state": "completed", 00:13:35.923 "digest": "sha512", 00:13:35.923 "dhgroup": "ffdhe8192" 00:13:35.923 } 00:13:35.923 } 00:13:35.923 ]' 00:13:35.923 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:35.923 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:35.923 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:35.923 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:35.923 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:35.924 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:35.924 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:35.924 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.182 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:13:36.182 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:13:37.120 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:37.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:37.120 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:13:37.120 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.120 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.120 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.120 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key3 00:13:37.120 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.120 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.120 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.120 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:37.120 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:37.379 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:37.379 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:37.379 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:37.379 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:37.379 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:37.379 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:37.379 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:37.379 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:37.379 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:37.379 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:37.638 request: 00:13:37.638 { 00:13:37.638 "name": "nvme0", 00:13:37.638 "trtype": "tcp", 00:13:37.639 "traddr": "10.0.0.3", 00:13:37.639 "adrfam": "ipv4", 00:13:37.639 "trsvcid": "4420", 00:13:37.639 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:37.639 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:13:37.639 "prchk_reftag": false, 00:13:37.639 "prchk_guard": false, 00:13:37.639 "hdgst": false, 00:13:37.639 "ddgst": false, 00:13:37.639 "dhchap_key": "key3", 00:13:37.639 "allow_unrecognized_csi": false, 00:13:37.639 "method": "bdev_nvme_attach_controller", 00:13:37.639 "req_id": 1 00:13:37.639 } 00:13:37.639 Got JSON-RPC error response 00:13:37.639 response: 00:13:37.639 { 00:13:37.639 "code": -5, 00:13:37.639 "message": "Input/output error" 00:13:37.639 } 00:13:37.639 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:37.639 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:37.639 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:37.639 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:37.639 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:13:37.639 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:13:37.639 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:37.639 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:37.898 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:37.898 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:37.898 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:37.898 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:37.898 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:37.898 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:37.898 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:37.898 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:37.898 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:37.898 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:38.158 request: 00:13:38.158 { 00:13:38.158 "name": "nvme0", 00:13:38.158 "trtype": "tcp", 00:13:38.158 "traddr": "10.0.0.3", 00:13:38.158 "adrfam": "ipv4", 00:13:38.158 "trsvcid": "4420", 00:13:38.158 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:38.158 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:13:38.158 "prchk_reftag": false, 00:13:38.158 "prchk_guard": false, 00:13:38.158 "hdgst": false, 00:13:38.158 "ddgst": false, 00:13:38.158 "dhchap_key": "key3", 00:13:38.158 "allow_unrecognized_csi": false, 00:13:38.158 "method": "bdev_nvme_attach_controller", 00:13:38.158 "req_id": 1 00:13:38.158 } 00:13:38.158 Got JSON-RPC error response 00:13:38.158 response: 00:13:38.158 { 00:13:38.158 "code": -5, 00:13:38.158 "message": "Input/output error" 00:13:38.158 } 00:13:38.158 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:38.158 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:38.158 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:38.158 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:38.158 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:38.158 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:13:38.158 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:38.158 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:38.158 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:38.158 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:38.417 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:13:38.417 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.417 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.417 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.417 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:13:38.417 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.417 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.417 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.417 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:38.417 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:38.417 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:38.417 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:38.417 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:38.417 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:38.417 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:38.417 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:38.417 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:38.417 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:38.984 request: 00:13:38.984 { 00:13:38.984 "name": "nvme0", 00:13:38.984 "trtype": "tcp", 00:13:38.984 "traddr": "10.0.0.3", 00:13:38.984 "adrfam": "ipv4", 00:13:38.984 "trsvcid": "4420", 00:13:38.984 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:38.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:13:38.984 "prchk_reftag": false, 00:13:38.984 "prchk_guard": false, 00:13:38.984 "hdgst": false, 00:13:38.984 "ddgst": false, 00:13:38.984 "dhchap_key": "key0", 00:13:38.984 "dhchap_ctrlr_key": "key1", 00:13:38.984 "allow_unrecognized_csi": false, 00:13:38.984 "method": "bdev_nvme_attach_controller", 00:13:38.984 "req_id": 1 00:13:38.984 } 00:13:38.984 Got JSON-RPC error response 00:13:38.984 response: 00:13:38.984 { 00:13:38.984 "code": -5, 00:13:38.984 "message": "Input/output error" 00:13:38.984 } 00:13:38.984 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:38.984 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:38.984 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:38.984 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:13:38.984 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:13:38.984 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:38.984 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:39.253 nvme0n1 00:13:39.253 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:13:39.253 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.253 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:13:39.524 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.524 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.524 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:39.784 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key1 00:13:39.784 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.784 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.784 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.784 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:39.784 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:39.784 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:40.724 nvme0n1 00:13:40.724 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:13:40.724 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:40.724 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:13:40.724 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:40.724 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:40.724 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.724 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.724 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.724 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:13:40.724 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:13:40.724 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:40.984 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:40.984 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:13:40.984 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -l 0 --dhchap-secret DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: --dhchap-ctrl-secret DHHC-1:03:NjE5MmQ4N2E2MTAyNGY2Njc2MDhmMjJjYWI0NmRlMjdmZGQwZTIwMmVhNTlhNmQ1ZjY1OTFmYzQxNTE1NjgxMX2xT98=: 00:13:41.918 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:13:41.918 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:13:41.918 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:13:41.918 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:13:41.918 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:13:41.918 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:13:41.918 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:13:41.918 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.918 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.176 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:13:42.176 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:42.176 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:13:42.176 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:42.176 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:42.176 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:42.176 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:42.176 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:42.176 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:42.176 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:42.743 request: 00:13:42.743 { 00:13:42.743 "name": "nvme0", 00:13:42.743 "trtype": "tcp", 00:13:42.743 "traddr": "10.0.0.3", 00:13:42.743 "adrfam": "ipv4", 00:13:42.743 "trsvcid": "4420", 00:13:42.743 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:42.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31", 00:13:42.743 "prchk_reftag": false, 00:13:42.743 "prchk_guard": false, 00:13:42.743 "hdgst": false, 00:13:42.743 "ddgst": false, 00:13:42.743 "dhchap_key": "key1", 00:13:42.743 "allow_unrecognized_csi": false, 00:13:42.743 "method": "bdev_nvme_attach_controller", 00:13:42.743 "req_id": 1 00:13:42.743 } 00:13:42.743 Got JSON-RPC error response 00:13:42.743 response: 00:13:42.743 { 00:13:42.743 "code": -5, 00:13:42.743 "message": "Input/output error" 00:13:42.743 } 00:13:42.743 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:42.743 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:42.743 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:42.743 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:42.743 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:42.743 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:42.743 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:43.677 nvme0n1 00:13:43.677 
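[Editor's note] For orientation, the successful bidirectional attach traced just above (which produces the nvme0n1 namespace) can be reproduced by hand against the same host application. This is a minimal sketch, not part of the harness: it assumes the host RPC socket at /var/tmp/host.sock is still up and that the DH-CHAP secrets were registered under the key names key2 and key3 earlier in this run; the HOSTNQN and SUBNQN shell variables are introduced here only for readability, all other paths, addresses and flags are copied from the trace.

  # Values copied from this run; introduced as variables only for this sketch
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # Attach a controller that authenticates with key2 and additionally verifies
  # the target (bidirectional DH-CHAP) with key3
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key key3

  # Confirm the controller came up, as the harness does in the next step
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_get_controllers | jq -r '.[].name'
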
13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:13:43.677 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:13:43.677 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.936 13:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.936 13:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.936 13:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.195 13:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:13:44.195 13:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.195 13:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.195 13:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.195 13:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:13:44.195 13:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:44.195 13:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:44.764 nvme0n1 00:13:44.764 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:13:44.764 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:13:44.764 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:45.023 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.023 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.023 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.282 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:45.282 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.282 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.282 13:00:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.282 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: '' 2s 00:13:45.282 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:45.282 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:45.282 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: 00:13:45.282 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:13:45.282 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:45.282 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:45.282 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: ]] 00:13:45.282 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ODViMjE2MWFjNDQzNDI0MGNjZWM4YjM4YmE3ZTFhNWNoaaqR: 00:13:45.282 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:13:45.282 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:45.282 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:47.187 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:13:47.188 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:13:47.188 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:47.188 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:13:47.188 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:13:47.188 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:13:47.447 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:13:47.447 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key1 --dhchap-ctrlr-key key2 00:13:47.447 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.447 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.447 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.447 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: 2s 00:13:47.447 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:47.447 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:47.447 13:00:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:13:47.447 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: 00:13:47.447 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:47.447 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:47.447 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:13:47.447 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: ]] 00:13:47.447 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZDk4ODg3OTgzNGY4ZmEyYmU1YWYzZGYxNjViNmEzZDQ2MWVlNzQ2MGJlZGUzZDcyva0OSQ==: 00:13:47.447 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:47.447 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:49.363 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:13:49.363 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:13:49.363 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:49.363 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:13:49.363 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:13:49.363 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:13:49.363 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:13:49.363 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:49.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:49.363 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:49.363 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.363 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.363 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.363 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:49.363 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:49.363 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:50.299 nvme0n1 00:13:50.299 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:50.299 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.299 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.300 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.300 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:50.300 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:51.236 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:13:51.236 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:13:51.236 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.236 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.236 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:13:51.236 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.236 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.496 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.496 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:13:51.496 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:13:51.754 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:13:51.754 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:13:51.754 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:52.013 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:52.013 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:52.013 13:00:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.014 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.014 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.014 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:52.014 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:52.014 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:52.014 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:13:52.014 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:52.014 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:13:52.014 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:52.014 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:52.014 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:52.581 request: 00:13:52.581 { 00:13:52.581 "name": "nvme0", 00:13:52.581 "dhchap_key": "key1", 00:13:52.581 "dhchap_ctrlr_key": "key3", 00:13:52.581 "method": "bdev_nvme_set_keys", 00:13:52.581 "req_id": 1 00:13:52.581 } 00:13:52.581 Got JSON-RPC error response 00:13:52.581 response: 00:13:52.581 { 00:13:52.581 "code": -13, 00:13:52.581 "message": "Permission denied" 00:13:52.581 } 00:13:52.581 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:52.581 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:52.581 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:52.581 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:52.581 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:13:52.581 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:52.581 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:13:53.151 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:13:53.151 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:13:54.087 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:13:54.087 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.087 13:00:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:13:54.345 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:13:54.345 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:54.345 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.345 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.345 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.345 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:54.345 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:54.345 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:55.283 nvme0n1 00:13:55.283 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:55.283 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.283 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.283 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.284 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:55.284 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:55.284 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:55.284 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:13:55.284 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:55.284 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:13:55.284 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:55.284 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 
--dhchap-key key2 --dhchap-ctrlr-key key0 00:13:55.284 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:55.881 request: 00:13:55.881 { 00:13:55.881 "name": "nvme0", 00:13:55.881 "dhchap_key": "key2", 00:13:55.881 "dhchap_ctrlr_key": "key0", 00:13:55.881 "method": "bdev_nvme_set_keys", 00:13:55.881 "req_id": 1 00:13:55.881 } 00:13:55.881 Got JSON-RPC error response 00:13:55.881 response: 00:13:55.881 { 00:13:55.881 "code": -13, 00:13:55.881 "message": "Permission denied" 00:13:55.881 } 00:13:55.881 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:55.881 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:55.881 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:55.881 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:55.881 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:13:55.881 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.881 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:13:56.474 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:13:56.474 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:13:57.411 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:13:57.411 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:13:57.411 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.670 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:13:57.670 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:13:57.670 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:13:57.670 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67333 00:13:57.670 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67333 ']' 00:13:57.670 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67333 00:13:57.670 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:13:57.670 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:57.670 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67333 00:13:57.670 killing process with pid 67333 00:13:57.670 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:57.670 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:57.670 13:00:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67333' 00:13:57.670 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67333 00:13:57.670 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67333 00:13:58.238 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:13:58.238 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:58.238 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:13:58.238 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:58.238 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:13:58.238 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:58.238 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:58.238 rmmod nvme_tcp 00:13:58.238 rmmod nvme_fabrics 00:13:58.238 rmmod nvme_keyring 00:13:58.238 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:58.238 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:13:58.238 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:13:58.238 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 70434 ']' 00:13:58.238 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 70434 00:13:58.238 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 70434 ']' 00:13:58.238 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 70434 00:13:58.238 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:13:58.238 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:58.238 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70434 00:13:58.238 killing process with pid 70434 00:13:58.238 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:58.238 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:58.238 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70434' 00:13:58.238 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 70434 00:13:58.238 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 70434 00:13:58.498 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:58.498 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:58.498 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:58.498 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:13:58.498 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
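[Editor's note] The teardown being traced here is nvmftestfini from test/nvmf/common.sh: it stops the host and target applications, unloads nvme-tcp, nvme-fabrics and nvme-keyring, and in the iptr step whose pipeline continues in the following trace lines it filters out the firewall rules carrying the SPDK_NVMF marker (presumably added when the test network was set up) before dismantling the veth/bridge topology. Condensed into plain commands, all taken from this trace; a sketch of what the helpers do, not a substitute for them:

  # Drop firewall rules tagged SPDK_NVMF, keep everything else
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # Tear down the virtual test network: bridge, initiator-side veths,
  # and the target interfaces inside the nvmf_tgt_ns_spdk namespace
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
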
00:13:58.498 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:13:58.498 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:58.498 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:58.498 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:58.498 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:58.498 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:58.498 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:58.498 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:58.498 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:58.498 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:58.498 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:58.498 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:58.498 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:58.757 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:58.757 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:58.757 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:58.757 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:58.757 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:58.757 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.757 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:58.757 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.757 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:13:58.757 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Sxu /tmp/spdk.key-sha256.cgB /tmp/spdk.key-sha384.Olc /tmp/spdk.key-sha512.gah /tmp/spdk.key-sha512.Le9 /tmp/spdk.key-sha384.Pe4 /tmp/spdk.key-sha256.ZSi '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:13:58.757 00:13:58.757 real 3m15.830s 00:13:58.757 user 7m49.878s 00:13:58.757 sys 0m30.522s 00:13:58.757 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:58.757 ************************************ 00:13:58.757 END TEST nvmf_auth_target 00:13:58.757 ************************************ 00:13:58.757 13:00:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.757 13:00:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:13:58.757 13:00:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:58.757 13:00:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:58.757 13:00:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:58.757 13:00:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:58.757 ************************************ 00:13:58.757 START TEST nvmf_bdevio_no_huge 00:13:58.757 ************************************ 00:13:58.757 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:59.017 * Looking for test storage... 00:13:59.018 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:59.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.018 --rc genhtml_branch_coverage=1 00:13:59.018 --rc genhtml_function_coverage=1 00:13:59.018 --rc genhtml_legend=1 00:13:59.018 --rc geninfo_all_blocks=1 00:13:59.018 --rc geninfo_unexecuted_blocks=1 00:13:59.018 00:13:59.018 ' 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:59.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.018 --rc genhtml_branch_coverage=1 00:13:59.018 --rc genhtml_function_coverage=1 00:13:59.018 --rc genhtml_legend=1 00:13:59.018 --rc geninfo_all_blocks=1 00:13:59.018 --rc geninfo_unexecuted_blocks=1 00:13:59.018 00:13:59.018 ' 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:59.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.018 --rc genhtml_branch_coverage=1 00:13:59.018 --rc genhtml_function_coverage=1 00:13:59.018 --rc genhtml_legend=1 00:13:59.018 --rc geninfo_all_blocks=1 00:13:59.018 --rc geninfo_unexecuted_blocks=1 00:13:59.018 00:13:59.018 ' 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:59.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.018 --rc genhtml_branch_coverage=1 00:13:59.018 --rc genhtml_function_coverage=1 00:13:59.018 --rc genhtml_legend=1 00:13:59.018 --rc geninfo_all_blocks=1 00:13:59.018 --rc geninfo_unexecuted_blocks=1 00:13:59.018 00:13:59.018 ' 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:59.018 
13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:59.018 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:59.018 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:59.019 
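The "[: : integer expression expected" message from nvmf/common.sh line 33 a little earlier in this trace is the usual symptom of handing an empty string to test(1)'s numeric operator. A tiny reproduction and the common guard, shown only as an illustration (the variable name is hypothetical, and this is not a claim about how common.sh should be patched):

flag=''                                    # hypothetical stand-in for the empty value
[ "$flag" -eq 1 ] && echo enabled          # -> [: : integer expression expected
[ "${flag:-0}" -eq 1 ] && echo enabled || echo disabled   # guard: default empty to 0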
13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:59.019 Cannot find device "nvmf_init_br" 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:59.019 Cannot find device "nvmf_init_br2" 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:59.019 Cannot find device "nvmf_tgt_br" 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:59.019 Cannot find device "nvmf_tgt_br2" 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:59.019 Cannot find device "nvmf_init_br" 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:59.019 Cannot find device "nvmf_init_br2" 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:59.019 Cannot find device "nvmf_tgt_br" 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:59.019 Cannot find device "nvmf_tgt_br2" 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:59.019 Cannot find device "nvmf_br" 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:13:59.019 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:59.278 Cannot find device "nvmf_init_if" 00:13:59.278 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:13:59.278 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:59.278 Cannot find device "nvmf_init_if2" 00:13:59.278 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:13:59.278 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:13:59.278 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:59.278 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:13:59.278 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:59.278 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:59.278 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:13:59.278 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:59.278 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:59.278 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:59.278 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:59.278 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:59.278 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:59.278 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:59.278 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:59.279 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:59.279 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:59.279 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:59.279 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:59.279 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:59.279 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:59.279 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:59.279 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:59.279 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:59.279 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:59.279 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:59.279 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:59.279 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:59.279 13:00:30 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:59.279 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:59.279 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:59.279 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:59.279 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:59.539 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:59.539 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:59.539 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:59.539 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:59.539 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:59.539 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:59.539 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:59.539 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:59.539 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.111 ms 00:13:59.539 00:13:59.539 --- 10.0.0.3 ping statistics --- 00:13:59.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.539 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:13:59.539 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:59.539 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:59.539 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:13:59.539 00:13:59.539 --- 10.0.0.4 ping statistics --- 00:13:59.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.539 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:13:59.539 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:59.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:59.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:13:59.539 00:13:59.539 --- 10.0.0.1 ping statistics --- 00:13:59.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.539 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:13:59.539 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:59.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:59.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:13:59.539 00:13:59.539 --- 10.0.0.2 ping statistics --- 00:13:59.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.539 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:13:59.539 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:59.539 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:13:59.539 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:59.539 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:59.539 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:59.539 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:59.539 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:59.539 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:59.539 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:59.539 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:59.539 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:59.539 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:59.539 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:59.539 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=71069 00:13:59.539 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:59.539 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 71069 00:13:59.539 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 71069 ']' 00:13:59.539 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.539 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:59.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.539 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.539 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:59.539 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:59.539 [2024-11-29 13:00:30.909710] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
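The nvmf_veth_init sequence above is the plumbing the rest of the run leans on: veth pairs for initiator and target, the target ends moved into the nvmf_tgt_ns_spdk namespace, all legs enslaved to the nvmf_br bridge, port 4420 opened with tagged iptables rules, and the topology verified with pings before nvmf_tgt is launched inside the namespace. A condensed recreation for a single initiator/target pair, reusing the names and addresses from the trace but not the full common.sh function:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator leg
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target leg
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.3                                            # host -> target namespace, across the bridge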
00:13:59.539 [2024-11-29 13:00:30.909799] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:59.798 [2024-11-29 13:00:31.067861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:59.798 [2024-11-29 13:00:31.150455] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:59.798 [2024-11-29 13:00:31.150530] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:59.798 [2024-11-29 13:00:31.150544] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:59.798 [2024-11-29 13:00:31.150554] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:59.798 [2024-11-29 13:00:31.150563] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:59.798 [2024-11-29 13:00:31.151241] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:13:59.798 [2024-11-29 13:00:31.151377] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:13:59.798 [2024-11-29 13:00:31.151490] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:13:59.798 [2024-11-29 13:00:31.151493] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:13:59.798 [2024-11-29 13:00:31.157917] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:00.058 [2024-11-29 13:00:31.361030] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:00.058 Malloc0 00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.058 13:00:31 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:00.058 [2024-11-29 13:00:31.401401] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:00.058 { 00:14:00.058 "params": { 00:14:00.058 "name": "Nvme$subsystem", 00:14:00.058 "trtype": "$TEST_TRANSPORT", 00:14:00.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:00.058 "adrfam": "ipv4", 00:14:00.058 "trsvcid": "$NVMF_PORT", 00:14:00.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:00.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:00.058 "hdgst": ${hdgst:-false}, 00:14:00.058 "ddgst": ${ddgst:-false} 00:14:00.058 }, 00:14:00.058 "method": "bdev_nvme_attach_controller" 00:14:00.058 } 00:14:00.058 EOF 00:14:00.058 )") 00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
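The gen_nvmf_target_json trace above shows how bdevio gets its controller definition: each subsystem's bdev_nvme_attach_controller parameters are rendered through a heredoc, collected into a bash array, joined on commas and normalized with jq, and the result is what --json /dev/fd/62 reads. A stripped-down sketch of that array-of-heredocs pattern, with one subsystem and the fixed values from this run in place of the helper's $subsystem/$TEST_TRANSPORT substitutions:

config=()
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
# Join the collected fragments with commas (IFS changed only inside the subshell) and pretty-print.
( IFS=,; printf '%s\n' "${config[*]}" ) | jq .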
00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:14:00.058 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:00.058 "params": { 00:14:00.058 "name": "Nvme1", 00:14:00.058 "trtype": "tcp", 00:14:00.058 "traddr": "10.0.0.3", 00:14:00.058 "adrfam": "ipv4", 00:14:00.058 "trsvcid": "4420", 00:14:00.058 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:00.058 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:00.058 "hdgst": false, 00:14:00.058 "ddgst": false 00:14:00.058 }, 00:14:00.058 "method": "bdev_nvme_attach_controller" 00:14:00.058 }' 00:14:00.058 [2024-11-29 13:00:31.457104] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:14:00.058 [2024-11-29 13:00:31.457203] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid71092 ] 00:14:00.317 [2024-11-29 13:00:31.605896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:00.317 [2024-11-29 13:00:31.672781] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.317 [2024-11-29 13:00:31.673815] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:00.317 [2024-11-29 13:00:31.673861] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.317 [2024-11-29 13:00:31.686658] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:00.587 I/O targets: 00:14:00.588 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:00.588 00:14:00.588 00:14:00.588 CUnit - A unit testing framework for C - Version 2.1-3 00:14:00.588 http://cunit.sourceforge.net/ 00:14:00.588 00:14:00.588 00:14:00.588 Suite: bdevio tests on: Nvme1n1 00:14:00.588 Test: blockdev write read block ...passed 00:14:00.588 Test: blockdev write zeroes read block ...passed 00:14:00.588 Test: blockdev write zeroes read no split ...passed 00:14:00.588 Test: blockdev write zeroes read split ...passed 00:14:00.588 Test: blockdev write zeroes read split partial ...passed 00:14:00.588 Test: blockdev reset ...[2024-11-29 13:00:31.919839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:14:00.588 [2024-11-29 13:00:31.919973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x150e320 (9): Bad file descriptor 00:14:00.588 passed 00:14:00.588 Test: blockdev write read 8 blocks ...[2024-11-29 13:00:31.933817] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:14:00.588 passed 00:14:00.588 Test: blockdev write read size > 128k ...passed 00:14:00.588 Test: blockdev write read invalid size ...passed 00:14:00.588 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:00.588 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:00.588 Test: blockdev write read max offset ...passed 00:14:00.588 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:00.588 Test: blockdev writev readv 8 blocks ...passed 00:14:00.588 Test: blockdev writev readv 30 x 1block ...passed 00:14:00.588 Test: blockdev writev readv block ...passed 00:14:00.588 Test: blockdev writev readv size > 128k ...passed 00:14:00.588 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:00.588 Test: blockdev comparev and writev ...[2024-11-29 13:00:31.942728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:00.588 [2024-11-29 13:00:31.942778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:00.589 [2024-11-29 13:00:31.942804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:00.589 [2024-11-29 13:00:31.942819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:00.589 [2024-11-29 13:00:31.943296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:00.589 [2024-11-29 13:00:31.943327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:00.589 [2024-11-29 13:00:31.943349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:00.589 [2024-11-29 13:00:31.943362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:00.589 [2024-11-29 13:00:31.943674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:00.589 [2024-11-29 13:00:31.943697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:00.589 [2024-11-29 13:00:31.943718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:00.589 [2024-11-29 13:00:31.943731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:00.589 [2024-11-29 13:00:31.944018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:00.589 [2024-11-29 13:00:31.944046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:00.589 [2024-11-29 13:00:31.944068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:00.589 [2024-11-29 13:00:31.944081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:14:00.589 passed 00:14:00.589 Test: blockdev nvme passthru rw ...passed 00:14:00.589 Test: blockdev nvme passthru vendor specific ...[2024-11-29 13:00:31.944957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:00.589 [2024-11-29 13:00:31.944986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:00.589 [2024-11-29 13:00:31.945114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:00.589 [2024-11-29 13:00:31.945135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:00.590 [2024-11-29 13:00:31.945267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:00.590 [2024-11-29 13:00:31.945287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:00.590 [2024-11-29 13:00:31.945409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:00.590 [2024-11-29 13:00:31.945429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:00.590 passed 00:14:00.590 Test: blockdev nvme admin passthru ...passed 00:14:00.590 Test: blockdev copy ...passed 00:14:00.590 00:14:00.590 Run Summary: Type Total Ran Passed Failed Inactive 00:14:00.590 suites 1 1 n/a 0 0 00:14:00.590 tests 23 23 23 0 0 00:14:00.590 asserts 152 152 152 0 n/a 00:14:00.590 00:14:00.590 Elapsed time = 0.170 seconds 00:14:00.861 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:00.861 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.861 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:00.861 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.861 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:00.861 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:14:00.861 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:00.861 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:14:00.861 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:00.861 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:14:00.861 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:00.861 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:01.120 rmmod nvme_tcp 00:14:01.120 rmmod nvme_fabrics 00:14:01.120 rmmod nvme_keyring 00:14:01.120 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:01.120 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:14:01.120 13:00:32 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:14:01.120 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 71069 ']' 00:14:01.120 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 71069 00:14:01.121 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 71069 ']' 00:14:01.121 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 71069 00:14:01.121 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:14:01.121 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:01.121 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71069 00:14:01.121 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:14:01.121 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:14:01.121 killing process with pid 71069 00:14:01.121 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71069' 00:14:01.121 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 71069 00:14:01.121 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 71069 00:14:01.379 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:01.379 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:01.379 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:01.379 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:14:01.379 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:14:01.379 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:01.379 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:14:01.379 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:01.379 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:01.379 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:01.379 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:01.637 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:01.637 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:01.637 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:01.637 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:01.637 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set 
nvmf_tgt_br down 00:14:01.637 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:01.637 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:01.637 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:01.637 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:01.637 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:01.637 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:01.637 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:01.637 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.637 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:01.637 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.637 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:14:01.637 00:14:01.637 real 0m2.906s 00:14:01.637 user 0m7.939s 00:14:01.637 sys 0m1.426s 00:14:01.637 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:01.637 ************************************ 00:14:01.637 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:01.637 END TEST nvmf_bdevio_no_huge 00:14:01.637 ************************************ 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:01.897 ************************************ 00:14:01.897 START TEST nvmf_tls 00:14:01.897 ************************************ 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:01.897 * Looking for test storage... 
00:14:01.897 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:01.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.897 --rc genhtml_branch_coverage=1 00:14:01.897 --rc genhtml_function_coverage=1 00:14:01.897 --rc genhtml_legend=1 00:14:01.897 --rc geninfo_all_blocks=1 00:14:01.897 --rc geninfo_unexecuted_blocks=1 00:14:01.897 00:14:01.897 ' 00:14:01.897 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:01.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.897 --rc genhtml_branch_coverage=1 00:14:01.897 --rc genhtml_function_coverage=1 00:14:01.897 --rc genhtml_legend=1 00:14:01.897 --rc geninfo_all_blocks=1 00:14:01.897 --rc geninfo_unexecuted_blocks=1 00:14:01.897 00:14:01.897 ' 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:01.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.898 --rc genhtml_branch_coverage=1 00:14:01.898 --rc genhtml_function_coverage=1 00:14:01.898 --rc genhtml_legend=1 00:14:01.898 --rc geninfo_all_blocks=1 00:14:01.898 --rc geninfo_unexecuted_blocks=1 00:14:01.898 00:14:01.898 ' 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:01.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.898 --rc genhtml_branch_coverage=1 00:14:01.898 --rc genhtml_function_coverage=1 00:14:01.898 --rc genhtml_legend=1 00:14:01.898 --rc geninfo_all_blocks=1 00:14:01.898 --rc geninfo_unexecuted_blocks=1 00:14:01.898 00:14:01.898 ' 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:01.898 13:00:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:01.898 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:01.898 
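As in the earlier bdevio run, build_nvmf_app_args in this trace assembles the target's command line by appending to a bash array: the shared-memory id and 0xFFFF trace mask first, then the NO_HUGE options, with nvmftestinit later prefixing the whole array with the "ip netns exec nvmf_tgt_ns_spdk" wrapper so nvmf_tgt runs inside the test namespace. A hedged sketch of that composition; the NO_HUGE contents, binary path, and launch line are inferred from the nvmf_tgt command logged earlier for pid 71069, not copied from common.sh:

NVMF_APP_SHM_ID=0
NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
NO_HUGE=(--no-huge -s 1024)                              # assumed contents of NO_HUGE
NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)              # shm id + full tracepoint mask
NVMF_APP+=("${NO_HUGE[@]}")
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # wrap in the target namespace
"${NVMF_APP[@]}" -m 0x78 &                               # core mask as used by nvmfappstart above
nvmfpid=$!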
13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:01.898 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:02.157 Cannot find device "nvmf_init_br" 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:02.157 Cannot find device "nvmf_init_br2" 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:02.157 Cannot find device "nvmf_tgt_br" 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:02.157 Cannot find device "nvmf_tgt_br2" 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:02.157 Cannot find device "nvmf_init_br" 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:02.157 Cannot find device "nvmf_init_br2" 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:02.157 Cannot find device "nvmf_tgt_br" 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:02.157 Cannot find device "nvmf_tgt_br2" 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:02.157 Cannot find device "nvmf_br" 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:02.157 Cannot find device "nvmf_init_if" 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:02.157 Cannot find device "nvmf_init_if2" 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:02.157 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:02.157 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:02.157 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:02.417 13:00:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:02.417 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:02.417 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:14:02.417 00:14:02.417 --- 10.0.0.3 ping statistics --- 00:14:02.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.417 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:02.417 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:02.417 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:14:02.417 00:14:02.417 --- 10.0.0.4 ping statistics --- 00:14:02.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.417 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:02.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:02.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:14:02.417 00:14:02.417 --- 10.0.0.1 ping statistics --- 00:14:02.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.417 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:02.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:02.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:14:02.417 00:14:02.417 --- 10.0.0.2 ping statistics --- 00:14:02.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.417 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71331 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71331 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71331 ']' 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:02.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:02.417 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:02.417 [2024-11-29 13:00:33.916906] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
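Note on the network setup above (nvmf_veth_init): the earlier "Cannot find device" / "Cannot open network namespace" lines are a best-effort cleanup of any previous topology (each is followed by "true", so the failures are ignored), after which a fresh one is built. The target gets its own network namespace (nvmf_tgt_ns_spdk) holding 10.0.0.3 and 10.0.0.4, the host keeps the initiator addresses 10.0.0.1 and 10.0.0.2, the veth peers are joined through the nvmf_br bridge, iptables admits NVMe/TCP on port 4420, and the four pings confirm reachability in both directions. A condensed, hand-runnable sketch of the same topology, showing one veth pair per side with the interface names used in the log:

  ip netns add nvmf_tgt_ns_spdk
  # one veth pair for the initiator side, one for the target side
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the two host-side peer ends together
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # admit NVMe/TCP (port 4420) and accept traffic forwarded within the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3                                   # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator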
00:14:02.417 [2024-11-29 13:00:33.917007] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.677 [2024-11-29 13:00:34.074621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.677 [2024-11-29 13:00:34.140974] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.677 [2024-11-29 13:00:34.141033] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.677 [2024-11-29 13:00:34.141046] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.677 [2024-11-29 13:00:34.141056] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.677 [2024-11-29 13:00:34.141066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:02.677 [2024-11-29 13:00:34.141506] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.677 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:02.677 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:02.677 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:02.677 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:02.677 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:02.938 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.938 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:14:02.938 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:03.197 true 00:14:03.197 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:14:03.197 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:03.456 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:14:03.456 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:14:03.456 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:03.714 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:03.714 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:14:03.972 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:14:03.972 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:14:03.972 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:04.229 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:14:04.229 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:14:04.487 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:14:04.487 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:14:04.487 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:04.487 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:14:04.745 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:14:04.745 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:14:04.745 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:05.003 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:05.003 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:14:05.265 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:14:05.265 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:14:05.265 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:14:05.834 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:05.834 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:14:06.090 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:14:06.090 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:14:06.090 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:14:06.090 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:14:06.090 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:06.090 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:14:06.090 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:14:06.090 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:14:06.090 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:06.091 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:06.091 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:14:06.091 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:14:06.091 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:06.091 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:14:06.091 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:14:06.091 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:14:06.091 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:06.091 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:06.091 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:14:06.091 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.hZbdiziWuO 00:14:06.091 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:14:06.091 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.OSkxWV24Tw 00:14:06.091 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:06.091 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:06.091 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.hZbdiziWuO 00:14:06.091 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.OSkxWV24Tw 00:14:06.091 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:06.347 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:06.604 [2024-11-29 13:00:38.055274] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:06.861 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.hZbdiziWuO 00:14:06.861 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.hZbdiziWuO 00:14:06.861 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:07.128 [2024-11-29 13:00:38.387189] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:07.128 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:07.394 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:07.657 [2024-11-29 13:00:38.943361] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:07.657 [2024-11-29 13:00:38.943783] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:07.657 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:07.925 malloc0 00:14:07.925 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:08.195 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.hZbdiziWuO 00:14:08.487 13:00:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:08.755 13:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.hZbdiziWuO 00:14:18.739 Initializing NVMe Controllers 00:14:18.739 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:18.739 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:18.739 Initialization complete. Launching workers. 00:14:18.739 ======================================================== 00:14:18.739 Latency(us) 00:14:18.739 Device Information : IOPS MiB/s Average min max 00:14:18.739 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9006.23 35.18 7107.91 1007.98 12619.66 00:14:18.739 ======================================================== 00:14:18.739 Total : 9006.23 35.18 7107.91 1007.98 12619.66 00:14:18.739 00:14:18.998 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hZbdiziWuO 00:14:18.998 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:18.998 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:18.998 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:18.998 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.hZbdiziWuO 00:14:18.998 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:18.998 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71567 00:14:18.998 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:18.998 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71567 /var/tmp/bdevperf.sock 00:14:18.998 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:18.998 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71567 ']' 00:14:18.998 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:18.998 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:18.998 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:18.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
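Note on the target-side setup above: target/tls.sh wraps two 16-byte hex strings into interchange-format PSKs (the NVMeTLSkey-1:01:...: values produced by the format_key python helper), writes them to mktemp files with mode 0600, and then configures the target: ssl becomes the default socket implementation restricted to TLS 1.3, a TCP transport and subsystem are created, the listener on 10.0.0.3:4420 is added with -k (secure channel required), a malloc namespace is attached, and key0 is registered with the keyring and bound to host1. A condensed sketch of that RPC sequence; key_path is a hypothetical fixed path standing in for the mktemp file, and the key string is the one printed above:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  key_path=/tmp/tls_psk_key0     # hypothetical; the test uses mktemp
  echo -n "NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" > "$key_path"
  chmod 0600 "$key_path"

  # nvmf_tgt was started with --wait-for-rpc, so the socket layer can still be switched here
  $rpc_py sock_set_default_impl -i ssl
  $rpc_py sock_impl_set_options -i ssl --tls-version 13
  $rpc_py framework_start_init

  # transport, subsystem, TLS listener, backing namespace, and the PSK for host1
  $rpc_py nvmf_create_transport -t tcp -o
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  $rpc_py bdev_malloc_create 32 4096 -b malloc0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc_py keyring_file_add_key key0 "$key_path"
  $rpc_py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0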
00:14:18.998 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:18.998 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:18.998 [2024-11-29 13:00:50.317718] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:14:18.998 [2024-11-29 13:00:50.317819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71567 ] 00:14:18.998 [2024-11-29 13:00:50.469147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.256 [2024-11-29 13:00:50.530996] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:19.256 [2024-11-29 13:00:50.589561] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:19.822 13:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:19.822 13:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:19.822 13:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hZbdiziWuO 00:14:20.080 13:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:20.338 [2024-11-29 13:00:51.804777] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:20.597 TLSTESTn1 00:14:20.597 13:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:20.597 Running I/O for 10 seconds... 
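Note on the client side: the spdk_nvme_perf run just above passes -S ssl and --psk-path directly, while the bdevperf run uses its own RPC socket, registers the same key file there, and attaches the remote controller with --psk key0, so the NVMe/TCP connection to 10.0.0.3:4420 is made over TLS; perform_tests then drives the 4 KiB verify workload whose per-second throughput follows. A sketch of that sequence, assuming the application paths shown in the log; in the test the RPC calls are only issued once /var/tmp/bdevperf.sock is listening:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf_sock=/var/tmp/bdevperf.sock

  # idle bdevperf (-z) on core 2 (mask 0x4), 128-deep 4 KiB verify for 10 s once started
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r "$bperf_sock" \
      -q 128 -o 4096 -w verify -t 10 &

  # register the PSK with the initiator and attach the TLS-protected controller as TLSTEST
  $rpc_py -s "$bperf_sock" keyring_file_add_key key0 /tmp/tmp.hZbdiziWuO
  $rpc_py -s "$bperf_sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

  # run the configured workload against the attached namespace (TLSTESTn1)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$bperf_sock" perform_tests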
00:14:22.905 3697.00 IOPS, 14.44 MiB/s [2024-11-29T13:00:55.352Z] 3674.50 IOPS, 14.35 MiB/s [2024-11-29T13:00:56.284Z] 3756.33 IOPS, 14.67 MiB/s [2024-11-29T13:00:57.216Z] 3862.50 IOPS, 15.09 MiB/s [2024-11-29T13:00:58.148Z] 3887.60 IOPS, 15.19 MiB/s [2024-11-29T13:00:59.100Z] 3914.33 IOPS, 15.29 MiB/s [2024-11-29T13:01:00.033Z] 3944.00 IOPS, 15.41 MiB/s [2024-11-29T13:01:01.408Z] 3964.25 IOPS, 15.49 MiB/s [2024-11-29T13:01:02.341Z] 3945.22 IOPS, 15.41 MiB/s [2024-11-29T13:01:02.341Z] 3934.40 IOPS, 15.37 MiB/s 00:14:30.826 Latency(us) 00:14:30.826 [2024-11-29T13:01:02.341Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.826 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:30.826 Verification LBA range: start 0x0 length 0x2000 00:14:30.826 TLSTESTn1 : 10.02 3939.84 15.39 0.00 0.00 32427.08 7179.17 29074.15 00:14:30.826 [2024-11-29T13:01:02.341Z] =================================================================================================================== 00:14:30.826 [2024-11-29T13:01:02.341Z] Total : 3939.84 15.39 0.00 0.00 32427.08 7179.17 29074.15 00:14:30.826 { 00:14:30.826 "results": [ 00:14:30.826 { 00:14:30.826 "job": "TLSTESTn1", 00:14:30.826 "core_mask": "0x4", 00:14:30.826 "workload": "verify", 00:14:30.826 "status": "finished", 00:14:30.826 "verify_range": { 00:14:30.826 "start": 0, 00:14:30.826 "length": 8192 00:14:30.826 }, 00:14:30.826 "queue_depth": 128, 00:14:30.826 "io_size": 4096, 00:14:30.826 "runtime": 10.018171, 00:14:30.826 "iops": 3939.8409150732205, 00:14:30.826 "mibps": 15.390003574504767, 00:14:30.826 "io_failed": 0, 00:14:30.826 "io_timeout": 0, 00:14:30.826 "avg_latency_us": 32427.077930211668, 00:14:30.826 "min_latency_us": 7179.170909090909, 00:14:30.826 "max_latency_us": 29074.15272727273 00:14:30.826 } 00:14:30.826 ], 00:14:30.826 "core_count": 1 00:14:30.826 } 00:14:30.826 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:30.826 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71567 00:14:30.826 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71567 ']' 00:14:30.826 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71567 00:14:30.826 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:30.826 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:30.826 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71567 00:14:30.826 killing process with pid 71567 00:14:30.826 Received shutdown signal, test time was about 10.000000 seconds 00:14:30.826 00:14:30.826 Latency(us) 00:14:30.826 [2024-11-29T13:01:02.341Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.826 [2024-11-29T13:01:02.341Z] =================================================================================================================== 00:14:30.826 [2024-11-29T13:01:02.341Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:30.826 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:30.826 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:30.826 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 71567' 00:14:30.826 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71567 00:14:30.826 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71567 00:14:30.826 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OSkxWV24Tw 00:14:30.826 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:30.826 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OSkxWV24Tw 00:14:30.826 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:30.826 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:30.826 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:30.826 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:30.826 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OSkxWV24Tw 00:14:30.826 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:30.826 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:30.827 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:30.827 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.OSkxWV24Tw 00:14:30.827 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:30.827 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71703 00:14:30.827 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:30.827 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:30.827 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71703 /var/tmp/bdevperf.sock 00:14:30.827 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71703 ']' 00:14:30.827 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:30.827 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:30.827 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:30.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:30.827 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:30.827 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:31.084 [2024-11-29 13:01:02.351029] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
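For reference, the bdevperf result above is emitted as JSON (the "results" array: one TLSTESTn1 verify job at roughly 3940 IOPS, 15.4 MiB/s, about 32.4 ms average latency at queue depth 128). If that blob were captured to a file, the headline numbers could be pulled out with a one-liner like the following; results.json is a hypothetical capture, and the field names are the ones visible above:

  jq -r '.results[] | "\(.job): \(.iops|floor) IOPS, \(.mibps|floor) MiB/s, avg \(.avg_latency_us|floor) us"' results.json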
00:14:31.084 [2024-11-29 13:01:02.351786] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71703 ] 00:14:31.084 [2024-11-29 13:01:02.506846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.084 [2024-11-29 13:01:02.560958] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:31.344 [2024-11-29 13:01:02.614345] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:31.914 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:31.914 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:31.914 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OSkxWV24Tw 00:14:32.482 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:32.482 [2024-11-29 13:01:03.926861] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:32.482 [2024-11-29 13:01:03.937626] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:32.482 [2024-11-29 13:01:03.937698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3eff0 (107): Transport endpoint is not connected 00:14:32.482 [2024-11-29 13:01:03.938688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3eff0 (9): Bad file descriptor 00:14:32.482 [2024-11-29 13:01:03.939694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:14:32.482 [2024-11-29 13:01:03.939731] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:32.482 [2024-11-29 13:01:03.939758] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:32.482 [2024-11-29 13:01:03.939773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:14:32.482 request: 00:14:32.482 { 00:14:32.482 "name": "TLSTEST", 00:14:32.482 "trtype": "tcp", 00:14:32.482 "traddr": "10.0.0.3", 00:14:32.482 "adrfam": "ipv4", 00:14:32.482 "trsvcid": "4420", 00:14:32.482 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:32.482 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:32.482 "prchk_reftag": false, 00:14:32.482 "prchk_guard": false, 00:14:32.482 "hdgst": false, 00:14:32.482 "ddgst": false, 00:14:32.482 "psk": "key0", 00:14:32.482 "allow_unrecognized_csi": false, 00:14:32.482 "method": "bdev_nvme_attach_controller", 00:14:32.482 "req_id": 1 00:14:32.482 } 00:14:32.482 Got JSON-RPC error response 00:14:32.482 response: 00:14:32.482 { 00:14:32.482 "code": -5, 00:14:32.482 "message": "Input/output error" 00:14:32.482 } 00:14:32.482 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71703 00:14:32.482 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71703 ']' 00:14:32.482 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71703 00:14:32.482 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:32.482 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:32.482 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71703 00:14:32.482 killing process with pid 71703 00:14:32.482 Received shutdown signal, test time was about 10.000000 seconds 00:14:32.482 00:14:32.482 Latency(us) 00:14:32.482 [2024-11-29T13:01:03.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.482 [2024-11-29T13:01:03.997Z] =================================================================================================================== 00:14:32.482 [2024-11-29T13:01:03.997Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:32.482 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:32.482 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:32.482 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71703' 00:14:32.482 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71703 00:14:32.482 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71703 00:14:32.742 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:32.742 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:32.742 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:32.742 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:32.742 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:32.742 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hZbdiziWuO 00:14:32.742 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:32.742 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hZbdiziWuO 
00:14:32.742 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:32.742 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:32.742 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:32.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:32.742 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:32.742 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hZbdiziWuO 00:14:32.742 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:32.742 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:32.742 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:32.742 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.hZbdiziWuO 00:14:32.742 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:32.742 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71737 00:14:32.742 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:32.742 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71737 /var/tmp/bdevperf.sock 00:14:32.742 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71737 ']' 00:14:32.742 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:32.742 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:32.742 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:32.742 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:32.742 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:32.742 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:32.742 [2024-11-29 13:01:04.244369] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
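Note on the expected-failure runs: the case at target/tls.sh@147 above is the first of them. key0 on the initiator points at /tmp/tmp.OSkxWV24Tw, which was never registered for host1 on the target, so the TLS handshake cannot complete, the socket drops (spdk_sock_recv errno 107), and bdev_nvme_attach_controller returns the "Input/output error" seen in the JSON response; the surrounding NOT wrapper treats that non-zero exit as a pass. The same shape repeats below for a wrong host NQN, a wrong subsystem NQN, and an empty key path. A minimal sketch of that assert-it-fails pattern, using the same attach call as the log:

  # the attach must fail when the initiator's key0 does not match the PSK the target holds for this host
  if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0; then
      echo "TLS attach unexpectedly succeeded with a mismatched PSK" >&2
      exit 1
  fi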
00:14:32.742 [2024-11-29 13:01:04.244467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71737 ] 00:14:33.002 [2024-11-29 13:01:04.386343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.002 [2024-11-29 13:01:04.446722] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:33.002 [2024-11-29 13:01:04.501933] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:33.261 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:33.261 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:33.261 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hZbdiziWuO 00:14:33.548 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:14:33.843 [2024-11-29 13:01:05.060282] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:33.843 [2024-11-29 13:01:05.067670] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:33.843 [2024-11-29 13:01:05.067724] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:33.843 [2024-11-29 13:01:05.067788] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:33.843 [2024-11-29 13:01:05.068055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf38ff0 (107): Transport endpoint is not connected 00:14:33.844 [2024-11-29 13:01:05.069035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf38ff0 (9): Bad file descriptor 00:14:33.844 [2024-11-29 13:01:05.070032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:14:33.844 [2024-11-29 13:01:05.070069] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:33.844 [2024-11-29 13:01:05.070079] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:33.844 [2024-11-29 13:01:05.070093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:14:33.844 request: 00:14:33.844 { 00:14:33.844 "name": "TLSTEST", 00:14:33.844 "trtype": "tcp", 00:14:33.844 "traddr": "10.0.0.3", 00:14:33.844 "adrfam": "ipv4", 00:14:33.844 "trsvcid": "4420", 00:14:33.844 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:33.844 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:33.844 "prchk_reftag": false, 00:14:33.844 "prchk_guard": false, 00:14:33.844 "hdgst": false, 00:14:33.844 "ddgst": false, 00:14:33.844 "psk": "key0", 00:14:33.844 "allow_unrecognized_csi": false, 00:14:33.844 "method": "bdev_nvme_attach_controller", 00:14:33.844 "req_id": 1 00:14:33.844 } 00:14:33.844 Got JSON-RPC error response 00:14:33.844 response: 00:14:33.844 { 00:14:33.844 "code": -5, 00:14:33.844 "message": "Input/output error" 00:14:33.844 } 00:14:33.844 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71737 00:14:33.844 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71737 ']' 00:14:33.844 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71737 00:14:33.844 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:33.844 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:33.844 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71737 00:14:33.844 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:33.844 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:33.844 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71737' 00:14:33.844 killing process with pid 71737 00:14:33.844 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71737 00:14:33.844 Received shutdown signal, test time was about 10.000000 seconds 00:14:33.844 00:14:33.844 Latency(us) 00:14:33.844 [2024-11-29T13:01:05.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.844 [2024-11-29T13:01:05.359Z] =================================================================================================================== 00:14:33.844 [2024-11-29T13:01:05.359Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:33.844 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71737 00:14:33.844 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:33.844 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:33.844 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:33.844 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:33.844 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:33.844 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hZbdiziWuO 00:14:33.844 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:33.844 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hZbdiziWuO 
00:14:33.844 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:33.844 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:33.844 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:33.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:33.844 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:33.844 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hZbdiziWuO 00:14:33.845 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:33.845 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:33.845 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:33.845 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.hZbdiziWuO 00:14:33.845 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:33.845 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:33.845 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71758 00:14:33.845 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:33.845 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71758 /var/tmp/bdevperf.sock 00:14:33.845 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71758 ']' 00:14:33.845 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:33.845 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:33.845 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:33.845 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:33.845 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:34.108 [2024-11-29 13:01:05.363650] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:14:34.108 [2024-11-29 13:01:05.363755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71758 ] 00:14:34.108 [2024-11-29 13:01:05.504223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.108 [2024-11-29 13:01:05.554512] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:34.108 [2024-11-29 13:01:05.610596] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:34.367 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:34.367 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:34.367 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hZbdiziWuO 00:14:34.626 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:34.885 [2024-11-29 13:01:06.179264] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:34.885 [2024-11-29 13:01:06.187847] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:34.885 [2024-11-29 13:01:06.187914] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:34.885 [2024-11-29 13:01:06.187980] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:34.885 [2024-11-29 13:01:06.187994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2150ff0 (107): Transport endpoint is not connected 00:14:34.885 [2024-11-29 13:01:06.188972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2150ff0 (9): Bad file descriptor 00:14:34.885 [2024-11-29 13:01:06.189969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:14:34.885 [2024-11-29 13:01:06.190005] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:34.885 [2024-11-29 13:01:06.190016] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:14:34.885 [2024-11-29 13:01:06.190031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:14:34.885 request: 00:14:34.885 { 00:14:34.885 "name": "TLSTEST", 00:14:34.885 "trtype": "tcp", 00:14:34.885 "traddr": "10.0.0.3", 00:14:34.885 "adrfam": "ipv4", 00:14:34.885 "trsvcid": "4420", 00:14:34.885 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:34.885 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:34.885 "prchk_reftag": false, 00:14:34.885 "prchk_guard": false, 00:14:34.885 "hdgst": false, 00:14:34.885 "ddgst": false, 00:14:34.886 "psk": "key0", 00:14:34.886 "allow_unrecognized_csi": false, 00:14:34.886 "method": "bdev_nvme_attach_controller", 00:14:34.886 "req_id": 1 00:14:34.886 } 00:14:34.886 Got JSON-RPC error response 00:14:34.886 response: 00:14:34.886 { 00:14:34.886 "code": -5, 00:14:34.886 "message": "Input/output error" 00:14:34.886 } 00:14:34.886 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71758 00:14:34.886 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71758 ']' 00:14:34.886 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71758 00:14:34.886 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:34.886 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:34.886 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71758 00:14:34.886 killing process with pid 71758 00:14:34.886 Received shutdown signal, test time was about 10.000000 seconds 00:14:34.886 00:14:34.886 Latency(us) 00:14:34.886 [2024-11-29T13:01:06.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.886 [2024-11-29T13:01:06.401Z] =================================================================================================================== 00:14:34.886 [2024-11-29T13:01:06.401Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:34.886 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:34.886 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:34.886 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71758' 00:14:34.886 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71758 00:14:34.886 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71758 00:14:35.145 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:35.145 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:35.145 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:35.145 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:35.145 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:35.145 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:35.145 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:35.145 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:35.145 13:01:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:35.145 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:35.145 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:35.145 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:35.145 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:35.145 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:35.145 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:35.145 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:35.145 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:35.145 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:35.145 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71779 00:14:35.145 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:35.145 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:35.145 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71779 /var/tmp/bdevperf.sock 00:14:35.145 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71779 ']' 00:14:35.145 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:35.145 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:35.145 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:35.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:35.145 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:35.145 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:35.145 [2024-11-29 13:01:06.493689] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
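The run below deliberately registers an empty key path. keyring_file_add_key only accepts absolute paths, so the RPC is rejected before any TLS connection is attempted, and the subsequent bdev_nvme_attach_controller then fails because "key0" was never created. A minimal reproduction against the bdevperf RPC socket, using paths that appear in this log (scripts/rpc.py is the RPC client in the SPDK tree):

    # Rejected: "Non-absolute paths are not allowed" -> JSON-RPC error -1
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''
    # Accepted: absolute path to a key file with owner-only permissions
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hZbdiziWuO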
00:14:35.145 [2024-11-29 13:01:06.493984] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71779 ] 00:14:35.145 [2024-11-29 13:01:06.636738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.404 [2024-11-29 13:01:06.688357] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:35.405 [2024-11-29 13:01:06.741653] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:35.405 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:35.405 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:35.405 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:14:35.663 [2024-11-29 13:01:07.074780] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:14:35.663 [2024-11-29 13:01:07.075101] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:35.663 request: 00:14:35.663 { 00:14:35.663 "name": "key0", 00:14:35.663 "path": "", 00:14:35.663 "method": "keyring_file_add_key", 00:14:35.663 "req_id": 1 00:14:35.663 } 00:14:35.663 Got JSON-RPC error response 00:14:35.663 response: 00:14:35.663 { 00:14:35.663 "code": -1, 00:14:35.663 "message": "Operation not permitted" 00:14:35.663 } 00:14:35.663 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:35.923 [2024-11-29 13:01:07.366956] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:35.923 [2024-11-29 13:01:07.367035] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:35.923 request: 00:14:35.923 { 00:14:35.923 "name": "TLSTEST", 00:14:35.923 "trtype": "tcp", 00:14:35.923 "traddr": "10.0.0.3", 00:14:35.923 "adrfam": "ipv4", 00:14:35.923 "trsvcid": "4420", 00:14:35.923 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:35.923 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:35.923 "prchk_reftag": false, 00:14:35.923 "prchk_guard": false, 00:14:35.923 "hdgst": false, 00:14:35.923 "ddgst": false, 00:14:35.923 "psk": "key0", 00:14:35.923 "allow_unrecognized_csi": false, 00:14:35.923 "method": "bdev_nvme_attach_controller", 00:14:35.923 "req_id": 1 00:14:35.923 } 00:14:35.923 Got JSON-RPC error response 00:14:35.923 response: 00:14:35.923 { 00:14:35.923 "code": -126, 00:14:35.923 "message": "Required key not available" 00:14:35.923 } 00:14:35.923 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71779 00:14:35.923 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71779 ']' 00:14:35.923 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71779 00:14:35.923 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:35.923 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:35.923 13:01:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71779 00:14:35.923 killing process with pid 71779 00:14:35.923 Received shutdown signal, test time was about 10.000000 seconds 00:14:35.923 00:14:35.923 Latency(us) 00:14:35.923 [2024-11-29T13:01:07.438Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.923 [2024-11-29T13:01:07.438Z] =================================================================================================================== 00:14:35.923 [2024-11-29T13:01:07.438Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:35.923 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:35.923 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:35.923 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71779' 00:14:35.923 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71779 00:14:35.923 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71779 00:14:36.183 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:36.183 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:36.183 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:36.183 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:36.183 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:36.183 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71331 00:14:36.183 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71331 ']' 00:14:36.183 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71331 00:14:36.183 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:36.183 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:36.183 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71331 00:14:36.183 killing process with pid 71331 00:14:36.183 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:36.183 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:36.183 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71331' 00:14:36.183 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71331 00:14:36.183 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71331 00:14:36.442 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:36.442 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:36.442 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:36.442 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:14:36.442 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:36.442 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:14:36.442 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:36.442 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:36.442 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:14:36.442 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.nIbxcjPvpI 00:14:36.442 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:36.442 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.nIbxcjPvpI 00:14:36.442 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:14:36.442 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:36.442 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:36.442 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:36.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.442 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71811 00:14:36.442 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71811 00:14:36.442 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:36.442 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71811 ']' 00:14:36.442 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.442 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:36.442 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.442 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:36.442 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:36.701 [2024-11-29 13:01:07.957054] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:14:36.701 [2024-11-29 13:01:07.957160] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.701 [2024-11-29 13:01:08.099355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.701 [2024-11-29 13:01:08.153232] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:36.701 [2024-11-29 13:01:08.153285] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
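The key_long value traced just above is an NVMe TLS PSK "interchange" string: a prefix, a two-digit hash identifier (02 here, from digest 2), and a base64 blob, colon-terminated. Reconstructed from the xtrace (the prefix/key/digest locals plus the "python -" heredoc in nvmf/common.sh), the helper plausibly looks like the sketch below; the appended little-endian CRC32 is an assumption inferred from the format, not copied from the source.

    format_key() {    # sketch only, not the exact nvmf/common.sh implementation
        local prefix=$1 key=$2 digest=$3
        python3 - "$prefix" "$key" "$digest" <<'PYEOF'
    import base64, sys, zlib
    prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
    crc = zlib.crc32(key).to_bytes(4, "little")   # integrity tag appended to the key bytes
    print("{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(key + crc).decode()))
    PYEOF
    }
    # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2
    # should, under the CRC assumption above, print the NVMeTLSkey-1:02:MDAx...wWXNJw==: value captured in the trace.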
00:14:36.701 [2024-11-29 13:01:08.153313] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:36.701 [2024-11-29 13:01:08.153322] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:36.701 [2024-11-29 13:01:08.153329] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:36.701 [2024-11-29 13:01:08.153795] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.701 [2024-11-29 13:01:08.211102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:36.960 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:36.960 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:36.960 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:36.960 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:36.960 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:36.960 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:36.960 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.nIbxcjPvpI 00:14:36.960 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.nIbxcjPvpI 00:14:36.960 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:37.218 [2024-11-29 13:01:08.557412] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:37.218 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:37.477 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:37.736 [2024-11-29 13:01:09.137558] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:37.736 [2024-11-29 13:01:09.138115] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:37.736 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:37.995 malloc0 00:14:37.995 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:38.254 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.nIbxcjPvpI 00:14:38.513 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:38.772 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nIbxcjPvpI 00:14:38.772 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
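For reference, the setup_nvmf_tgt steps traced above reduce to the following RPC sequence on the target side; names, address and sizes are taken verbatim from the trace, and rpc.py is scripts/rpc.py in the SPDK tree:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS listener
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.nIbxcjPvpI
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0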
00:14:38.772 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:38.772 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:38.772 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.nIbxcjPvpI 00:14:38.772 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:38.772 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:38.772 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71859 00:14:38.772 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:38.772 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71859 /var/tmp/bdevperf.sock 00:14:38.772 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71859 ']' 00:14:38.772 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:38.772 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:38.772 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:38.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:38.772 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:38.772 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:38.772 [2024-11-29 13:01:10.249993] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
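The initiator side of each run, including the successful one that follows, is the same short sequence against bdevperf's private RPC socket; the commands are taken from the trace, with the start/wait step paraphrased:

    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    # ... wait for /var/tmp/bdevperf.sock to come up (waitforlisten in the script) ...
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nIbxcjPvpI
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests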
00:14:38.772 [2024-11-29 13:01:10.250324] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71859 ] 00:14:39.031 [2024-11-29 13:01:10.394521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.031 [2024-11-29 13:01:10.452089] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:39.031 [2024-11-29 13:01:10.506837] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:39.968 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:39.968 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:39.968 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nIbxcjPvpI 00:14:39.968 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:40.225 [2024-11-29 13:01:11.703668] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:40.483 TLSTESTn1 00:14:40.483 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:40.484 Running I/O for 10 seconds... 00:14:42.407 3806.00 IOPS, 14.87 MiB/s [2024-11-29T13:01:15.305Z] 3799.50 IOPS, 14.84 MiB/s [2024-11-29T13:01:16.240Z] 3811.00 IOPS, 14.89 MiB/s [2024-11-29T13:01:17.174Z] 3821.75 IOPS, 14.93 MiB/s [2024-11-29T13:01:18.109Z] 3821.80 IOPS, 14.93 MiB/s [2024-11-29T13:01:19.044Z] 3822.50 IOPS, 14.93 MiB/s [2024-11-29T13:01:19.979Z] 3826.29 IOPS, 14.95 MiB/s [2024-11-29T13:01:20.921Z] 3828.88 IOPS, 14.96 MiB/s [2024-11-29T13:01:22.299Z] 3833.22 IOPS, 14.97 MiB/s [2024-11-29T13:01:22.299Z] 3839.20 IOPS, 15.00 MiB/s 00:14:50.784 Latency(us) 00:14:50.784 [2024-11-29T13:01:22.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.784 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:50.784 Verification LBA range: start 0x0 length 0x2000 00:14:50.784 TLSTESTn1 : 10.02 3845.45 15.02 0.00 0.00 33231.79 5302.46 30980.65 00:14:50.784 [2024-11-29T13:01:22.299Z] =================================================================================================================== 00:14:50.784 [2024-11-29T13:01:22.299Z] Total : 3845.45 15.02 0.00 0.00 33231.79 5302.46 30980.65 00:14:50.784 { 00:14:50.784 "results": [ 00:14:50.784 { 00:14:50.784 "job": "TLSTESTn1", 00:14:50.784 "core_mask": "0x4", 00:14:50.784 "workload": "verify", 00:14:50.784 "status": "finished", 00:14:50.784 "verify_range": { 00:14:50.784 "start": 0, 00:14:50.784 "length": 8192 00:14:50.784 }, 00:14:50.784 "queue_depth": 128, 00:14:50.784 "io_size": 4096, 00:14:50.784 "runtime": 10.016764, 00:14:50.784 "iops": 3845.4534817831386, 00:14:50.784 "mibps": 15.021302663215385, 00:14:50.784 "io_failed": 0, 00:14:50.784 "io_timeout": 0, 00:14:50.784 "avg_latency_us": 33231.79462281896, 00:14:50.784 "min_latency_us": 5302.458181818181, 00:14:50.784 
"max_latency_us": 30980.654545454545 00:14:50.784 } 00:14:50.784 ], 00:14:50.784 "core_count": 1 00:14:50.784 } 00:14:50.784 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:50.784 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71859 00:14:50.784 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71859 ']' 00:14:50.784 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71859 00:14:50.784 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:50.784 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:50.784 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71859 00:14:50.784 killing process with pid 71859 00:14:50.784 Received shutdown signal, test time was about 10.000000 seconds 00:14:50.784 00:14:50.784 Latency(us) 00:14:50.784 [2024-11-29T13:01:22.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.784 [2024-11-29T13:01:22.299Z] =================================================================================================================== 00:14:50.784 [2024-11-29T13:01:22.299Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:50.784 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:50.784 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:50.784 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71859' 00:14:50.785 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71859 00:14:50.785 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71859 00:14:50.785 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.nIbxcjPvpI 00:14:50.785 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nIbxcjPvpI 00:14:50.785 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:50.785 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nIbxcjPvpI 00:14:50.785 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:50.785 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:50.785 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:50.785 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:50.785 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nIbxcjPvpI 00:14:50.785 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:50.785 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:50.785 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:50.785 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.nIbxcjPvpI 00:14:50.785 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:50.785 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72000 00:14:50.785 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:50.785 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:50.785 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72000 /var/tmp/bdevperf.sock 00:14:50.785 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72000 ']' 00:14:50.785 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:50.785 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:50.785 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:50.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:50.785 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:50.785 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:50.785 [2024-11-29 13:01:22.285450] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:14:50.785 [2024-11-29 13:01:22.285741] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72000 ] 00:14:51.043 [2024-11-29 13:01:22.426642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.043 [2024-11-29 13:01:22.485942] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:51.302 [2024-11-29 13:01:22.563130] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:51.302 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:51.302 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:51.302 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nIbxcjPvpI 00:14:51.561 [2024-11-29 13:01:22.906236] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.nIbxcjPvpI': 0100666 00:14:51.561 [2024-11-29 13:01:22.906328] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:51.561 request: 00:14:51.561 { 00:14:51.561 "name": "key0", 00:14:51.561 "path": "/tmp/tmp.nIbxcjPvpI", 00:14:51.561 "method": "keyring_file_add_key", 00:14:51.561 "req_id": 1 00:14:51.561 } 00:14:51.561 Got JSON-RPC error response 00:14:51.561 response: 00:14:51.561 { 00:14:51.561 "code": -1, 00:14:51.561 "message": "Operation not permitted" 00:14:51.561 } 00:14:51.561 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:51.819 [2024-11-29 13:01:23.186517] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:51.819 [2024-11-29 13:01:23.187061] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:51.819 request: 00:14:51.819 { 00:14:51.819 "name": "TLSTEST", 00:14:51.819 "trtype": "tcp", 00:14:51.819 "traddr": "10.0.0.3", 00:14:51.819 "adrfam": "ipv4", 00:14:51.819 "trsvcid": "4420", 00:14:51.819 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:51.819 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:51.819 "prchk_reftag": false, 00:14:51.819 "prchk_guard": false, 00:14:51.819 "hdgst": false, 00:14:51.819 "ddgst": false, 00:14:51.819 "psk": "key0", 00:14:51.819 "allow_unrecognized_csi": false, 00:14:51.819 "method": "bdev_nvme_attach_controller", 00:14:51.819 "req_id": 1 00:14:51.819 } 00:14:51.819 Got JSON-RPC error response 00:14:51.819 response: 00:14:51.819 { 00:14:51.819 "code": -126, 00:14:51.819 "message": "Required key not available" 00:14:51.819 } 00:14:51.819 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72000 00:14:51.819 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72000 ']' 00:14:51.819 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72000 00:14:51.819 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:51.819 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:51.819 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72000 00:14:51.819 killing process with pid 72000 00:14:51.819 Received shutdown signal, test time was about 10.000000 seconds 00:14:51.819 00:14:51.819 Latency(us) 00:14:51.819 [2024-11-29T13:01:23.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.819 [2024-11-29T13:01:23.334Z] =================================================================================================================== 00:14:51.819 [2024-11-29T13:01:23.334Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:51.819 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:51.819 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:51.819 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72000' 00:14:51.819 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72000 00:14:51.819 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72000 00:14:52.078 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:52.078 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:52.078 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:52.078 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:52.078 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:52.078 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 71811 00:14:52.078 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71811 ']' 00:14:52.078 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71811 00:14:52.078 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:52.078 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:52.078 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71811 00:14:52.078 killing process with pid 71811 00:14:52.078 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:52.078 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:52.078 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71811' 00:14:52.078 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71811 00:14:52.078 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71811 00:14:52.337 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:14:52.337 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:52.337 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:52.337 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:14:52.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.337 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72032 00:14:52.337 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:52.337 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72032 00:14:52.337 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72032 ']' 00:14:52.337 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.337 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:52.337 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.337 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:52.337 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:52.597 [2024-11-29 13:01:23.875806] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:14:52.597 [2024-11-29 13:01:23.876154] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.597 [2024-11-29 13:01:24.024685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.597 [2024-11-29 13:01:24.078531] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:52.597 [2024-11-29 13:01:24.078933] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:52.597 [2024-11-29 13:01:24.078970] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:52.597 [2024-11-29 13:01:24.078979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:52.597 [2024-11-29 13:01:24.078987] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:52.597 [2024-11-29 13:01:24.079413] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.855 [2024-11-29 13:01:24.140468] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:53.422 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:53.422 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:53.422 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:53.422 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:53.422 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:53.422 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:53.422 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.nIbxcjPvpI 00:14:53.422 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:53.422 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.nIbxcjPvpI 00:14:53.422 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:14:53.422 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:53.422 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:14:53.422 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:53.422 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.nIbxcjPvpI 00:14:53.422 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.nIbxcjPvpI 00:14:53.422 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:53.992 [2024-11-29 13:01:25.203171] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:53.992 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:54.251 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:54.510 [2024-11-29 13:01:25.783381] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:54.510 [2024-11-29 13:01:25.783705] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:54.510 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:54.767 malloc0 00:14:54.767 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:55.025 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.nIbxcjPvpI 00:14:55.285 
[2024-11-29 13:01:26.739708] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.nIbxcjPvpI': 0100666 00:14:55.285 [2024-11-29 13:01:26.739792] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:55.285 request: 00:14:55.285 { 00:14:55.285 "name": "key0", 00:14:55.285 "path": "/tmp/tmp.nIbxcjPvpI", 00:14:55.285 "method": "keyring_file_add_key", 00:14:55.285 "req_id": 1 00:14:55.285 } 00:14:55.285 Got JSON-RPC error response 00:14:55.285 response: 00:14:55.285 { 00:14:55.285 "code": -1, 00:14:55.285 "message": "Operation not permitted" 00:14:55.285 } 00:14:55.285 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:55.867 [2024-11-29 13:01:27.071861] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:14:55.867 [2024-11-29 13:01:27.072186] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:14:55.867 request: 00:14:55.867 { 00:14:55.867 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.867 "host": "nqn.2016-06.io.spdk:host1", 00:14:55.867 "psk": "key0", 00:14:55.867 "method": "nvmf_subsystem_add_host", 00:14:55.867 "req_id": 1 00:14:55.867 } 00:14:55.867 Got JSON-RPC error response 00:14:55.867 response: 00:14:55.867 { 00:14:55.867 "code": -32603, 00:14:55.867 "message": "Internal error" 00:14:55.867 } 00:14:55.867 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:55.867 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:55.867 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:55.867 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:55.867 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 72032 00:14:55.867 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72032 ']' 00:14:55.867 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72032 00:14:55.867 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:55.867 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:55.867 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72032 00:14:55.867 killing process with pid 72032 00:14:55.868 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:55.868 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:55.868 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72032' 00:14:55.868 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72032 00:14:55.868 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72032 00:14:55.868 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.nIbxcjPvpI 00:14:55.868 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:14:55.868 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:55.868 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:55.868 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:56.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.147 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72101 00:14:56.147 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72101 00:14:56.147 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72101 ']' 00:14:56.147 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.147 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:56.147 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:56.147 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.147 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:56.147 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:56.147 [2024-11-29 13:01:27.437812] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:14:56.147 [2024-11-29 13:01:27.437940] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.147 [2024-11-29 13:01:27.592711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.407 [2024-11-29 13:01:27.660926] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.407 [2024-11-29 13:01:27.661066] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.407 [2024-11-29 13:01:27.661088] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.407 [2024-11-29 13:01:27.661101] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.407 [2024-11-29 13:01:27.661112] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
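Both of the preceding failures (bdevperf pid 72000 on the initiator and nvmf_tgt pid 72032 on the target) have the same root cause: keyring_file_add_key rejects a key file whose group/other permission bits are set (0100666 in the error above), so "key0" is never created and every consumer of it, bdev_nvme_attach_controller and nvmf_subsystem_add_host alike, fails in turn. Reproduced in isolation with the same file:

    chmod 0666 /tmp/tmp.nIbxcjPvpI
    rpc.py keyring_file_add_key key0 /tmp/tmp.nIbxcjPvpI   # rejected: "Invalid permissions for key file ... 0100666"
    chmod 0600 /tmp/tmp.nIbxcjPvpI
    rpc.py keyring_file_add_key key0 /tmp/tmp.nIbxcjPvpI   # accepted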
00:14:56.407 [2024-11-29 13:01:27.661540] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.407 [2024-11-29 13:01:27.723054] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:56.407 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:56.407 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:56.407 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:56.407 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:56.407 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:56.407 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:56.407 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.nIbxcjPvpI 00:14:56.407 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.nIbxcjPvpI 00:14:56.407 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:56.666 [2024-11-29 13:01:28.133228] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:56.666 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:57.234 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:57.234 [2024-11-29 13:01:28.725394] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:57.234 [2024-11-29 13:01:28.725943] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:57.234 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:57.493 malloc0 00:14:57.752 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:58.012 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.nIbxcjPvpI 00:14:58.272 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:58.532 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=72155 00:14:58.532 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:58.532 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:58.532 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 72155 /var/tmp/bdevperf.sock 00:14:58.532 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72155 ']' 
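After the next bdevperf attach succeeds, the test snapshots the running target configuration with save_config; the large JSON document further down is simply the captured value of that variable:

    tgtconf=$(rpc.py save_config)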
00:14:58.532 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:58.532 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:58.532 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:58.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:58.532 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:58.532 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:58.791 [2024-11-29 13:01:30.066098] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:14:58.791 [2024-11-29 13:01:30.066368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72155 ] 00:14:58.791 [2024-11-29 13:01:30.221021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.791 [2024-11-29 13:01:30.301925] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:59.051 [2024-11-29 13:01:30.379825] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:59.051 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:59.051 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:59.051 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nIbxcjPvpI 00:14:59.617 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:59.876 [2024-11-29 13:01:31.158906] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:59.876 TLSTESTn1 00:14:59.876 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:00.444 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:15:00.444 "subsystems": [ 00:15:00.444 { 00:15:00.444 "subsystem": "keyring", 00:15:00.444 "config": [ 00:15:00.444 { 00:15:00.444 "method": "keyring_file_add_key", 00:15:00.444 "params": { 00:15:00.444 "name": "key0", 00:15:00.444 "path": "/tmp/tmp.nIbxcjPvpI" 00:15:00.444 } 00:15:00.444 } 00:15:00.444 ] 00:15:00.444 }, 00:15:00.444 { 00:15:00.444 "subsystem": "iobuf", 00:15:00.444 "config": [ 00:15:00.444 { 00:15:00.444 "method": "iobuf_set_options", 00:15:00.444 "params": { 00:15:00.444 "small_pool_count": 8192, 00:15:00.444 "large_pool_count": 1024, 00:15:00.444 "small_bufsize": 8192, 00:15:00.444 "large_bufsize": 135168, 00:15:00.444 "enable_numa": false 00:15:00.444 } 00:15:00.444 } 00:15:00.444 ] 00:15:00.444 }, 00:15:00.444 { 00:15:00.444 "subsystem": "sock", 00:15:00.444 "config": [ 00:15:00.444 { 00:15:00.444 "method": "sock_set_default_impl", 00:15:00.444 "params": { 
00:15:00.444 "impl_name": "uring" 00:15:00.444 } 00:15:00.444 }, 00:15:00.444 { 00:15:00.444 "method": "sock_impl_set_options", 00:15:00.444 "params": { 00:15:00.444 "impl_name": "ssl", 00:15:00.444 "recv_buf_size": 4096, 00:15:00.444 "send_buf_size": 4096, 00:15:00.444 "enable_recv_pipe": true, 00:15:00.444 "enable_quickack": false, 00:15:00.444 "enable_placement_id": 0, 00:15:00.444 "enable_zerocopy_send_server": true, 00:15:00.444 "enable_zerocopy_send_client": false, 00:15:00.444 "zerocopy_threshold": 0, 00:15:00.444 "tls_version": 0, 00:15:00.444 "enable_ktls": false 00:15:00.444 } 00:15:00.444 }, 00:15:00.444 { 00:15:00.444 "method": "sock_impl_set_options", 00:15:00.444 "params": { 00:15:00.444 "impl_name": "posix", 00:15:00.444 "recv_buf_size": 2097152, 00:15:00.444 "send_buf_size": 2097152, 00:15:00.444 "enable_recv_pipe": true, 00:15:00.444 "enable_quickack": false, 00:15:00.445 "enable_placement_id": 0, 00:15:00.445 "enable_zerocopy_send_server": true, 00:15:00.445 "enable_zerocopy_send_client": false, 00:15:00.445 "zerocopy_threshold": 0, 00:15:00.445 "tls_version": 0, 00:15:00.445 "enable_ktls": false 00:15:00.445 } 00:15:00.445 }, 00:15:00.445 { 00:15:00.445 "method": "sock_impl_set_options", 00:15:00.445 "params": { 00:15:00.445 "impl_name": "uring", 00:15:00.445 "recv_buf_size": 2097152, 00:15:00.445 "send_buf_size": 2097152, 00:15:00.445 "enable_recv_pipe": true, 00:15:00.445 "enable_quickack": false, 00:15:00.445 "enable_placement_id": 0, 00:15:00.445 "enable_zerocopy_send_server": false, 00:15:00.445 "enable_zerocopy_send_client": false, 00:15:00.445 "zerocopy_threshold": 0, 00:15:00.445 "tls_version": 0, 00:15:00.445 "enable_ktls": false 00:15:00.445 } 00:15:00.445 } 00:15:00.445 ] 00:15:00.445 }, 00:15:00.445 { 00:15:00.445 "subsystem": "vmd", 00:15:00.445 "config": [] 00:15:00.445 }, 00:15:00.445 { 00:15:00.445 "subsystem": "accel", 00:15:00.445 "config": [ 00:15:00.445 { 00:15:00.445 "method": "accel_set_options", 00:15:00.445 "params": { 00:15:00.445 "small_cache_size": 128, 00:15:00.445 "large_cache_size": 16, 00:15:00.445 "task_count": 2048, 00:15:00.445 "sequence_count": 2048, 00:15:00.445 "buf_count": 2048 00:15:00.445 } 00:15:00.445 } 00:15:00.445 ] 00:15:00.445 }, 00:15:00.445 { 00:15:00.445 "subsystem": "bdev", 00:15:00.445 "config": [ 00:15:00.445 { 00:15:00.445 "method": "bdev_set_options", 00:15:00.445 "params": { 00:15:00.445 "bdev_io_pool_size": 65535, 00:15:00.445 "bdev_io_cache_size": 256, 00:15:00.445 "bdev_auto_examine": true, 00:15:00.445 "iobuf_small_cache_size": 128, 00:15:00.445 "iobuf_large_cache_size": 16 00:15:00.445 } 00:15:00.445 }, 00:15:00.445 { 00:15:00.445 "method": "bdev_raid_set_options", 00:15:00.445 "params": { 00:15:00.445 "process_window_size_kb": 1024, 00:15:00.445 "process_max_bandwidth_mb_sec": 0 00:15:00.445 } 00:15:00.445 }, 00:15:00.445 { 00:15:00.445 "method": "bdev_iscsi_set_options", 00:15:00.445 "params": { 00:15:00.445 "timeout_sec": 30 00:15:00.445 } 00:15:00.445 }, 00:15:00.445 { 00:15:00.445 "method": "bdev_nvme_set_options", 00:15:00.445 "params": { 00:15:00.445 "action_on_timeout": "none", 00:15:00.445 "timeout_us": 0, 00:15:00.445 "timeout_admin_us": 0, 00:15:00.445 "keep_alive_timeout_ms": 10000, 00:15:00.445 "arbitration_burst": 0, 00:15:00.445 "low_priority_weight": 0, 00:15:00.445 "medium_priority_weight": 0, 00:15:00.445 "high_priority_weight": 0, 00:15:00.445 "nvme_adminq_poll_period_us": 10000, 00:15:00.445 "nvme_ioq_poll_period_us": 0, 00:15:00.445 "io_queue_requests": 0, 00:15:00.445 "delay_cmd_submit": 
true, 00:15:00.445 "transport_retry_count": 4, 00:15:00.445 "bdev_retry_count": 3, 00:15:00.445 "transport_ack_timeout": 0, 00:15:00.445 "ctrlr_loss_timeout_sec": 0, 00:15:00.445 "reconnect_delay_sec": 0, 00:15:00.445 "fast_io_fail_timeout_sec": 0, 00:15:00.445 "disable_auto_failback": false, 00:15:00.445 "generate_uuids": false, 00:15:00.445 "transport_tos": 0, 00:15:00.445 "nvme_error_stat": false, 00:15:00.445 "rdma_srq_size": 0, 00:15:00.445 "io_path_stat": false, 00:15:00.445 "allow_accel_sequence": false, 00:15:00.445 "rdma_max_cq_size": 0, 00:15:00.445 "rdma_cm_event_timeout_ms": 0, 00:15:00.445 "dhchap_digests": [ 00:15:00.445 "sha256", 00:15:00.445 "sha384", 00:15:00.445 "sha512" 00:15:00.445 ], 00:15:00.445 "dhchap_dhgroups": [ 00:15:00.445 "null", 00:15:00.445 "ffdhe2048", 00:15:00.445 "ffdhe3072", 00:15:00.445 "ffdhe4096", 00:15:00.445 "ffdhe6144", 00:15:00.445 "ffdhe8192" 00:15:00.445 ] 00:15:00.445 } 00:15:00.445 }, 00:15:00.445 { 00:15:00.445 "method": "bdev_nvme_set_hotplug", 00:15:00.445 "params": { 00:15:00.445 "period_us": 100000, 00:15:00.445 "enable": false 00:15:00.445 } 00:15:00.445 }, 00:15:00.445 { 00:15:00.445 "method": "bdev_malloc_create", 00:15:00.445 "params": { 00:15:00.445 "name": "malloc0", 00:15:00.445 "num_blocks": 8192, 00:15:00.445 "block_size": 4096, 00:15:00.445 "physical_block_size": 4096, 00:15:00.445 "uuid": "389b050e-e3be-427c-9f5b-812f50f7c2a4", 00:15:00.445 "optimal_io_boundary": 0, 00:15:00.445 "md_size": 0, 00:15:00.445 "dif_type": 0, 00:15:00.445 "dif_is_head_of_md": false, 00:15:00.445 "dif_pi_format": 0 00:15:00.445 } 00:15:00.445 }, 00:15:00.445 { 00:15:00.445 "method": "bdev_wait_for_examine" 00:15:00.445 } 00:15:00.445 ] 00:15:00.445 }, 00:15:00.445 { 00:15:00.445 "subsystem": "nbd", 00:15:00.445 "config": [] 00:15:00.445 }, 00:15:00.445 { 00:15:00.445 "subsystem": "scheduler", 00:15:00.445 "config": [ 00:15:00.445 { 00:15:00.445 "method": "framework_set_scheduler", 00:15:00.445 "params": { 00:15:00.445 "name": "static" 00:15:00.445 } 00:15:00.445 } 00:15:00.445 ] 00:15:00.445 }, 00:15:00.445 { 00:15:00.445 "subsystem": "nvmf", 00:15:00.445 "config": [ 00:15:00.445 { 00:15:00.445 "method": "nvmf_set_config", 00:15:00.445 "params": { 00:15:00.445 "discovery_filter": "match_any", 00:15:00.445 "admin_cmd_passthru": { 00:15:00.445 "identify_ctrlr": false 00:15:00.445 }, 00:15:00.445 "dhchap_digests": [ 00:15:00.445 "sha256", 00:15:00.445 "sha384", 00:15:00.445 "sha512" 00:15:00.445 ], 00:15:00.445 "dhchap_dhgroups": [ 00:15:00.445 "null", 00:15:00.445 "ffdhe2048", 00:15:00.445 "ffdhe3072", 00:15:00.445 "ffdhe4096", 00:15:00.445 "ffdhe6144", 00:15:00.445 "ffdhe8192" 00:15:00.445 ] 00:15:00.445 } 00:15:00.445 }, 00:15:00.445 { 00:15:00.445 "method": "nvmf_set_max_subsystems", 00:15:00.445 "params": { 00:15:00.445 "max_subsystems": 1024 00:15:00.445 } 00:15:00.445 }, 00:15:00.445 { 00:15:00.445 "method": "nvmf_set_crdt", 00:15:00.445 "params": { 00:15:00.445 "crdt1": 0, 00:15:00.445 "crdt2": 0, 00:15:00.445 "crdt3": 0 00:15:00.445 } 00:15:00.445 }, 00:15:00.445 { 00:15:00.445 "method": "nvmf_create_transport", 00:15:00.445 "params": { 00:15:00.445 "trtype": "TCP", 00:15:00.445 "max_queue_depth": 128, 00:15:00.445 "max_io_qpairs_per_ctrlr": 127, 00:15:00.445 "in_capsule_data_size": 4096, 00:15:00.445 "max_io_size": 131072, 00:15:00.445 "io_unit_size": 131072, 00:15:00.445 "max_aq_depth": 128, 00:15:00.445 "num_shared_buffers": 511, 00:15:00.445 "buf_cache_size": 4294967295, 00:15:00.445 "dif_insert_or_strip": false, 00:15:00.445 "zcopy": false, 
00:15:00.445 "c2h_success": false, 00:15:00.445 "sock_priority": 0, 00:15:00.445 "abort_timeout_sec": 1, 00:15:00.445 "ack_timeout": 0, 00:15:00.445 "data_wr_pool_size": 0 00:15:00.445 } 00:15:00.445 }, 00:15:00.445 { 00:15:00.445 "method": "nvmf_create_subsystem", 00:15:00.445 "params": { 00:15:00.445 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.445 "allow_any_host": false, 00:15:00.445 "serial_number": "SPDK00000000000001", 00:15:00.445 "model_number": "SPDK bdev Controller", 00:15:00.445 "max_namespaces": 10, 00:15:00.445 "min_cntlid": 1, 00:15:00.445 "max_cntlid": 65519, 00:15:00.445 "ana_reporting": false 00:15:00.445 } 00:15:00.445 }, 00:15:00.445 { 00:15:00.445 "method": "nvmf_subsystem_add_host", 00:15:00.445 "params": { 00:15:00.445 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.445 "host": "nqn.2016-06.io.spdk:host1", 00:15:00.445 "psk": "key0" 00:15:00.445 } 00:15:00.445 }, 00:15:00.445 { 00:15:00.445 "method": "nvmf_subsystem_add_ns", 00:15:00.445 "params": { 00:15:00.445 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.445 "namespace": { 00:15:00.445 "nsid": 1, 00:15:00.446 "bdev_name": "malloc0", 00:15:00.446 "nguid": "389B050EE3BE427C9F5B812F50F7C2A4", 00:15:00.446 "uuid": "389b050e-e3be-427c-9f5b-812f50f7c2a4", 00:15:00.446 "no_auto_visible": false 00:15:00.446 } 00:15:00.446 } 00:15:00.446 }, 00:15:00.446 { 00:15:00.446 "method": "nvmf_subsystem_add_listener", 00:15:00.446 "params": { 00:15:00.446 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.446 "listen_address": { 00:15:00.446 "trtype": "TCP", 00:15:00.446 "adrfam": "IPv4", 00:15:00.446 "traddr": "10.0.0.3", 00:15:00.446 "trsvcid": "4420" 00:15:00.446 }, 00:15:00.446 "secure_channel": true 00:15:00.446 } 00:15:00.446 } 00:15:00.446 ] 00:15:00.446 } 00:15:00.446 ] 00:15:00.446 }' 00:15:00.446 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:00.705 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:15:00.705 "subsystems": [ 00:15:00.705 { 00:15:00.705 "subsystem": "keyring", 00:15:00.705 "config": [ 00:15:00.705 { 00:15:00.705 "method": "keyring_file_add_key", 00:15:00.705 "params": { 00:15:00.705 "name": "key0", 00:15:00.705 "path": "/tmp/tmp.nIbxcjPvpI" 00:15:00.705 } 00:15:00.705 } 00:15:00.705 ] 00:15:00.705 }, 00:15:00.705 { 00:15:00.705 "subsystem": "iobuf", 00:15:00.705 "config": [ 00:15:00.705 { 00:15:00.705 "method": "iobuf_set_options", 00:15:00.705 "params": { 00:15:00.705 "small_pool_count": 8192, 00:15:00.705 "large_pool_count": 1024, 00:15:00.705 "small_bufsize": 8192, 00:15:00.705 "large_bufsize": 135168, 00:15:00.705 "enable_numa": false 00:15:00.705 } 00:15:00.705 } 00:15:00.705 ] 00:15:00.705 }, 00:15:00.705 { 00:15:00.705 "subsystem": "sock", 00:15:00.705 "config": [ 00:15:00.705 { 00:15:00.705 "method": "sock_set_default_impl", 00:15:00.705 "params": { 00:15:00.705 "impl_name": "uring" 00:15:00.705 } 00:15:00.705 }, 00:15:00.705 { 00:15:00.705 "method": "sock_impl_set_options", 00:15:00.705 "params": { 00:15:00.705 "impl_name": "ssl", 00:15:00.705 "recv_buf_size": 4096, 00:15:00.705 "send_buf_size": 4096, 00:15:00.705 "enable_recv_pipe": true, 00:15:00.705 "enable_quickack": false, 00:15:00.705 "enable_placement_id": 0, 00:15:00.705 "enable_zerocopy_send_server": true, 00:15:00.705 "enable_zerocopy_send_client": false, 00:15:00.705 "zerocopy_threshold": 0, 00:15:00.705 "tls_version": 0, 00:15:00.705 "enable_ktls": false 00:15:00.705 } 00:15:00.705 }, 
00:15:00.705 { 00:15:00.705 "method": "sock_impl_set_options", 00:15:00.705 "params": { 00:15:00.705 "impl_name": "posix", 00:15:00.705 "recv_buf_size": 2097152, 00:15:00.705 "send_buf_size": 2097152, 00:15:00.705 "enable_recv_pipe": true, 00:15:00.705 "enable_quickack": false, 00:15:00.705 "enable_placement_id": 0, 00:15:00.705 "enable_zerocopy_send_server": true, 00:15:00.705 "enable_zerocopy_send_client": false, 00:15:00.705 "zerocopy_threshold": 0, 00:15:00.705 "tls_version": 0, 00:15:00.705 "enable_ktls": false 00:15:00.705 } 00:15:00.705 }, 00:15:00.705 { 00:15:00.705 "method": "sock_impl_set_options", 00:15:00.705 "params": { 00:15:00.705 "impl_name": "uring", 00:15:00.705 "recv_buf_size": 2097152, 00:15:00.705 "send_buf_size": 2097152, 00:15:00.705 "enable_recv_pipe": true, 00:15:00.705 "enable_quickack": false, 00:15:00.705 "enable_placement_id": 0, 00:15:00.705 "enable_zerocopy_send_server": false, 00:15:00.705 "enable_zerocopy_send_client": false, 00:15:00.705 "zerocopy_threshold": 0, 00:15:00.705 "tls_version": 0, 00:15:00.705 "enable_ktls": false 00:15:00.705 } 00:15:00.705 } 00:15:00.705 ] 00:15:00.705 }, 00:15:00.705 { 00:15:00.705 "subsystem": "vmd", 00:15:00.705 "config": [] 00:15:00.705 }, 00:15:00.705 { 00:15:00.705 "subsystem": "accel", 00:15:00.705 "config": [ 00:15:00.705 { 00:15:00.705 "method": "accel_set_options", 00:15:00.705 "params": { 00:15:00.705 "small_cache_size": 128, 00:15:00.705 "large_cache_size": 16, 00:15:00.705 "task_count": 2048, 00:15:00.705 "sequence_count": 2048, 00:15:00.706 "buf_count": 2048 00:15:00.706 } 00:15:00.706 } 00:15:00.706 ] 00:15:00.706 }, 00:15:00.706 { 00:15:00.706 "subsystem": "bdev", 00:15:00.706 "config": [ 00:15:00.706 { 00:15:00.706 "method": "bdev_set_options", 00:15:00.706 "params": { 00:15:00.706 "bdev_io_pool_size": 65535, 00:15:00.706 "bdev_io_cache_size": 256, 00:15:00.706 "bdev_auto_examine": true, 00:15:00.706 "iobuf_small_cache_size": 128, 00:15:00.706 "iobuf_large_cache_size": 16 00:15:00.706 } 00:15:00.706 }, 00:15:00.706 { 00:15:00.706 "method": "bdev_raid_set_options", 00:15:00.706 "params": { 00:15:00.706 "process_window_size_kb": 1024, 00:15:00.706 "process_max_bandwidth_mb_sec": 0 00:15:00.706 } 00:15:00.706 }, 00:15:00.706 { 00:15:00.706 "method": "bdev_iscsi_set_options", 00:15:00.706 "params": { 00:15:00.706 "timeout_sec": 30 00:15:00.706 } 00:15:00.706 }, 00:15:00.706 { 00:15:00.706 "method": "bdev_nvme_set_options", 00:15:00.706 "params": { 00:15:00.706 "action_on_timeout": "none", 00:15:00.706 "timeout_us": 0, 00:15:00.706 "timeout_admin_us": 0, 00:15:00.706 "keep_alive_timeout_ms": 10000, 00:15:00.706 "arbitration_burst": 0, 00:15:00.706 "low_priority_weight": 0, 00:15:00.706 "medium_priority_weight": 0, 00:15:00.706 "high_priority_weight": 0, 00:15:00.706 "nvme_adminq_poll_period_us": 10000, 00:15:00.706 "nvme_ioq_poll_period_us": 0, 00:15:00.706 "io_queue_requests": 512, 00:15:00.706 "delay_cmd_submit": true, 00:15:00.706 "transport_retry_count": 4, 00:15:00.706 "bdev_retry_count": 3, 00:15:00.706 "transport_ack_timeout": 0, 00:15:00.706 "ctrlr_loss_timeout_sec": 0, 00:15:00.706 "reconnect_delay_sec": 0, 00:15:00.706 "fast_io_fail_timeout_sec": 0, 00:15:00.706 "disable_auto_failback": false, 00:15:00.706 "generate_uuids": false, 00:15:00.706 "transport_tos": 0, 00:15:00.706 "nvme_error_stat": false, 00:15:00.706 "rdma_srq_size": 0, 00:15:00.706 "io_path_stat": false, 00:15:00.706 "allow_accel_sequence": false, 00:15:00.706 "rdma_max_cq_size": 0, 00:15:00.706 "rdma_cm_event_timeout_ms": 0, 00:15:00.706 
"dhchap_digests": [ 00:15:00.706 "sha256", 00:15:00.706 "sha384", 00:15:00.706 "sha512" 00:15:00.706 ], 00:15:00.706 "dhchap_dhgroups": [ 00:15:00.706 "null", 00:15:00.706 "ffdhe2048", 00:15:00.706 "ffdhe3072", 00:15:00.706 "ffdhe4096", 00:15:00.706 "ffdhe6144", 00:15:00.706 "ffdhe8192" 00:15:00.706 ] 00:15:00.706 } 00:15:00.706 }, 00:15:00.706 { 00:15:00.706 "method": "bdev_nvme_attach_controller", 00:15:00.706 "params": { 00:15:00.706 "name": "TLSTEST", 00:15:00.706 "trtype": "TCP", 00:15:00.706 "adrfam": "IPv4", 00:15:00.706 "traddr": "10.0.0.3", 00:15:00.706 "trsvcid": "4420", 00:15:00.706 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.706 "prchk_reftag": false, 00:15:00.706 "prchk_guard": false, 00:15:00.706 "ctrlr_loss_timeout_sec": 0, 00:15:00.706 "reconnect_delay_sec": 0, 00:15:00.706 "fast_io_fail_timeout_sec": 0, 00:15:00.706 "psk": "key0", 00:15:00.706 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:00.706 "hdgst": false, 00:15:00.706 "ddgst": false, 00:15:00.706 "multipath": "multipath" 00:15:00.706 } 00:15:00.706 }, 00:15:00.706 { 00:15:00.706 "method": "bdev_nvme_set_hotplug", 00:15:00.706 "params": { 00:15:00.706 "period_us": 100000, 00:15:00.706 "enable": false 00:15:00.706 } 00:15:00.706 }, 00:15:00.706 { 00:15:00.706 "method": "bdev_wait_for_examine" 00:15:00.706 } 00:15:00.706 ] 00:15:00.706 }, 00:15:00.706 { 00:15:00.706 "subsystem": "nbd", 00:15:00.706 "config": [] 00:15:00.706 } 00:15:00.706 ] 00:15:00.706 }' 00:15:00.706 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 72155 00:15:00.706 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72155 ']' 00:15:00.706 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72155 00:15:00.706 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:00.706 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:00.706 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72155 00:15:00.706 killing process with pid 72155 00:15:00.706 Received shutdown signal, test time was about 10.000000 seconds 00:15:00.706 00:15:00.706 Latency(us) 00:15:00.706 [2024-11-29T13:01:32.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.706 [2024-11-29T13:01:32.221Z] =================================================================================================================== 00:15:00.706 [2024-11-29T13:01:32.221Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:00.706 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:00.706 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:00.706 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72155' 00:15:00.706 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72155 00:15:00.706 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72155 00:15:00.966 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 72101 00:15:00.966 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72101 ']' 00:15:00.966 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 72101 00:15:00.966 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:00.966 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:00.966 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72101 00:15:00.966 killing process with pid 72101 00:15:00.966 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:00.966 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:00.966 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72101' 00:15:00.966 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72101 00:15:00.966 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72101 00:15:01.225 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:01.225 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:01.225 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:01.225 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:01.225 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:15:01.225 "subsystems": [ 00:15:01.225 { 00:15:01.225 "subsystem": "keyring", 00:15:01.225 "config": [ 00:15:01.225 { 00:15:01.225 "method": "keyring_file_add_key", 00:15:01.225 "params": { 00:15:01.225 "name": "key0", 00:15:01.225 "path": "/tmp/tmp.nIbxcjPvpI" 00:15:01.225 } 00:15:01.225 } 00:15:01.225 ] 00:15:01.225 }, 00:15:01.225 { 00:15:01.225 "subsystem": "iobuf", 00:15:01.225 "config": [ 00:15:01.225 { 00:15:01.225 "method": "iobuf_set_options", 00:15:01.225 "params": { 00:15:01.225 "small_pool_count": 8192, 00:15:01.225 "large_pool_count": 1024, 00:15:01.225 "small_bufsize": 8192, 00:15:01.225 "large_bufsize": 135168, 00:15:01.225 "enable_numa": false 00:15:01.225 } 00:15:01.225 } 00:15:01.225 ] 00:15:01.225 }, 00:15:01.225 { 00:15:01.225 "subsystem": "sock", 00:15:01.225 "config": [ 00:15:01.225 { 00:15:01.225 "method": "sock_set_default_impl", 00:15:01.225 "params": { 00:15:01.225 "impl_name": "uring" 00:15:01.225 } 00:15:01.225 }, 00:15:01.225 { 00:15:01.225 "method": "sock_impl_set_options", 00:15:01.225 "params": { 00:15:01.225 "impl_name": "ssl", 00:15:01.225 "recv_buf_size": 4096, 00:15:01.225 "send_buf_size": 4096, 00:15:01.225 "enable_recv_pipe": true, 00:15:01.225 "enable_quickack": false, 00:15:01.225 "enable_placement_id": 0, 00:15:01.225 "enable_zerocopy_send_server": true, 00:15:01.225 "enable_zerocopy_send_client": false, 00:15:01.225 "zerocopy_threshold": 0, 00:15:01.225 "tls_version": 0, 00:15:01.225 "enable_ktls": false 00:15:01.225 } 00:15:01.225 }, 00:15:01.225 { 00:15:01.225 "method": "sock_impl_set_options", 00:15:01.225 "params": { 00:15:01.225 "impl_name": "posix", 00:15:01.225 "recv_buf_size": 2097152, 00:15:01.225 "send_buf_size": 2097152, 00:15:01.225 "enable_recv_pipe": true, 00:15:01.225 "enable_quickack": false, 00:15:01.225 "enable_placement_id": 0, 00:15:01.225 "enable_zerocopy_send_server": true, 00:15:01.225 "enable_zerocopy_send_client": false, 00:15:01.225 "zerocopy_threshold": 0, 00:15:01.225 "tls_version": 0, 00:15:01.225 "enable_ktls": false 
00:15:01.225 } 00:15:01.225 }, 00:15:01.225 { 00:15:01.225 "method": "sock_impl_set_options", 00:15:01.225 "params": { 00:15:01.225 "impl_name": "uring", 00:15:01.225 "recv_buf_size": 2097152, 00:15:01.225 "send_buf_size": 2097152, 00:15:01.225 "enable_recv_pipe": true, 00:15:01.225 "enable_quickack": false, 00:15:01.225 "enable_placement_id": 0, 00:15:01.225 "enable_zerocopy_send_server": false, 00:15:01.225 "enable_zerocopy_send_client": false, 00:15:01.225 "zerocopy_threshold": 0, 00:15:01.225 "tls_version": 0, 00:15:01.225 "enable_ktls": false 00:15:01.225 } 00:15:01.225 } 00:15:01.225 ] 00:15:01.225 }, 00:15:01.225 { 00:15:01.225 "subsystem": "vmd", 00:15:01.225 "config": [] 00:15:01.225 }, 00:15:01.225 { 00:15:01.225 "subsystem": "accel", 00:15:01.225 "config": [ 00:15:01.225 { 00:15:01.225 "method": "accel_set_options", 00:15:01.225 "params": { 00:15:01.225 "small_cache_size": 128, 00:15:01.225 "large_cache_size": 16, 00:15:01.225 "task_count": 2048, 00:15:01.225 "sequence_count": 2048, 00:15:01.225 "buf_count": 2048 00:15:01.225 } 00:15:01.225 } 00:15:01.225 ] 00:15:01.225 }, 00:15:01.225 { 00:15:01.225 "subsystem": "bdev", 00:15:01.225 "config": [ 00:15:01.225 { 00:15:01.225 "method": "bdev_set_options", 00:15:01.225 "params": { 00:15:01.225 "bdev_io_pool_size": 65535, 00:15:01.225 "bdev_io_cache_size": 256, 00:15:01.225 "bdev_auto_examine": true, 00:15:01.225 "iobuf_small_cache_size": 128, 00:15:01.225 "iobuf_large_cache_size": 16 00:15:01.225 } 00:15:01.225 }, 00:15:01.225 { 00:15:01.225 "method": "bdev_raid_set_options", 00:15:01.225 "params": { 00:15:01.225 "process_window_size_kb": 1024, 00:15:01.225 "process_max_bandwidth_mb_sec": 0 00:15:01.225 } 00:15:01.225 }, 00:15:01.225 { 00:15:01.225 "method": "bdev_iscsi_set_options", 00:15:01.225 "params": { 00:15:01.225 "timeout_sec": 30 00:15:01.225 } 00:15:01.225 }, 00:15:01.225 { 00:15:01.225 "method": "bdev_nvme_set_options", 00:15:01.225 "params": { 00:15:01.225 "action_on_timeout": "none", 00:15:01.225 "timeout_us": 0, 00:15:01.225 "timeout_admin_us": 0, 00:15:01.225 "keep_alive_timeout_ms": 10000, 00:15:01.225 "arbitration_burst": 0, 00:15:01.225 "low_priority_weight": 0, 00:15:01.225 "medium_priority_weight": 0, 00:15:01.226 "high_priority_weight": 0, 00:15:01.226 "nvme_adminq_poll_period_us": 10000, 00:15:01.226 "nvme_ioq_poll_period_us": 0, 00:15:01.226 "io_queue_requests": 0, 00:15:01.226 "delay_cmd_submit": true, 00:15:01.226 "transport_retry_count": 4, 00:15:01.226 "bdev_retry_count": 3, 00:15:01.226 "transport_ack_timeout": 0, 00:15:01.226 "ctrlr_loss_timeout_sec": 0, 00:15:01.226 "reconnect_delay_sec": 0, 00:15:01.226 "fast_io_fail_timeout_sec": 0, 00:15:01.226 "disable_auto_failback": false, 00:15:01.226 "generate_uuids": false, 00:15:01.226 "transport_tos": 0, 00:15:01.226 "nvme_error_stat": false, 00:15:01.226 "rdma_srq_size": 0, 00:15:01.226 "io_path_stat": false, 00:15:01.226 "allow_accel_sequence": false, 00:15:01.226 "rdma_max_cq_size": 0, 00:15:01.226 "rdma_cm_event_timeout_ms": 0, 00:15:01.226 "dhchap_digests": [ 00:15:01.226 "sha256", 00:15:01.226 "sha384", 00:15:01.226 "sha512" 00:15:01.226 ], 00:15:01.226 "dhchap_dhgroups": [ 00:15:01.226 "null", 00:15:01.226 "ffdhe2048", 00:15:01.226 "ffdhe3072", 00:15:01.226 "ffdhe4096", 00:15:01.226 "ffdhe6144", 00:15:01.226 "ffdhe8192" 00:15:01.226 ] 00:15:01.226 } 00:15:01.226 }, 00:15:01.226 { 00:15:01.226 "method": "bdev_nvme_set_hotplug", 00:15:01.226 "params": { 00:15:01.226 "period_us": 100000, 00:15:01.226 "enable": false 00:15:01.226 } 00:15:01.226 }, 
00:15:01.226 { 00:15:01.226 "method": "bdev_malloc_create", 00:15:01.226 "params": { 00:15:01.226 "name": "malloc0", 00:15:01.226 "num_blocks": 8192, 00:15:01.226 "block_size": 4096, 00:15:01.226 "physical_block_size": 4096, 00:15:01.226 "uuid": "389b050e-e3be-427c-9f5b-812f50f7c2a4", 00:15:01.226 "optimal_io_boundary": 0, 00:15:01.226 "md_size": 0, 00:15:01.226 "dif_type": 0, 00:15:01.226 "dif_is_head_of_md": false, 00:15:01.226 "dif_pi_format": 0 00:15:01.226 } 00:15:01.226 }, 00:15:01.226 { 00:15:01.226 "method": "bdev_wait_for_examine" 00:15:01.226 } 00:15:01.226 ] 00:15:01.226 }, 00:15:01.226 { 00:15:01.226 "subsystem": "nbd", 00:15:01.226 "config": [] 00:15:01.226 }, 00:15:01.226 { 00:15:01.226 "subsystem": "scheduler", 00:15:01.226 "config": [ 00:15:01.226 { 00:15:01.226 "method": "framework_set_scheduler", 00:15:01.226 "params": { 00:15:01.226 "name": "static" 00:15:01.226 } 00:15:01.226 } 00:15:01.226 ] 00:15:01.226 }, 00:15:01.226 { 00:15:01.226 "subsystem": "nvmf", 00:15:01.226 "config": [ 00:15:01.226 { 00:15:01.226 "method": "nvmf_set_config", 00:15:01.226 "params": { 00:15:01.226 "discovery_filter": "match_any", 00:15:01.226 "admin_cmd_passthru": { 00:15:01.226 "identify_ctrlr": false 00:15:01.226 }, 00:15:01.226 "dhchap_digests": [ 00:15:01.226 "sha256", 00:15:01.226 "sha384", 00:15:01.226 "sha512" 00:15:01.226 ], 00:15:01.226 "dhchap_dhgroups": [ 00:15:01.226 "null", 00:15:01.226 "ffdhe2048", 00:15:01.226 "ffdhe3072", 00:15:01.226 "ffdhe4096", 00:15:01.226 "ffdhe6144", 00:15:01.226 "ffdhe8192" 00:15:01.226 ] 00:15:01.226 } 00:15:01.226 }, 00:15:01.226 { 00:15:01.226 "method": "nvmf_set_max_subsystems", 00:15:01.226 "params": { 00:15:01.226 "max_subsystems": 1024 00:15:01.226 } 00:15:01.226 }, 00:15:01.226 { 00:15:01.226 "method": "nvmf_set_crdt", 00:15:01.226 "params": { 00:15:01.226 "crdt1": 0, 00:15:01.226 "crdt2": 0, 00:15:01.226 "crdt3": 0 00:15:01.226 } 00:15:01.226 }, 00:15:01.226 { 00:15:01.226 "method": "nvmf_create_transport", 00:15:01.226 "params": { 00:15:01.226 "trtype": "TCP", 00:15:01.226 "max_queue_depth": 128, 00:15:01.226 "max_io_qpairs_per_ctrlr": 127, 00:15:01.226 "in_capsule_data_size": 4096, 00:15:01.226 "max_io_size": 131072, 00:15:01.226 "io_unit_size": 131072, 00:15:01.226 "max_aq_depth": 128, 00:15:01.226 "num_shared_buffers": 511, 00:15:01.226 "buf_cache_size": 4294967295, 00:15:01.226 "dif_insert_or_strip": false, 00:15:01.226 "zcopy": false, 00:15:01.226 "c2h_success": false, 00:15:01.226 "sock_priority": 0, 00:15:01.226 "abort_timeout_sec": 1, 00:15:01.226 "ack_timeout": 0, 00:15:01.226 "data_wr_pool_size": 0 00:15:01.226 } 00:15:01.226 }, 00:15:01.226 { 00:15:01.226 "method": "nvmf_create_subsystem", 00:15:01.226 "params": { 00:15:01.226 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:01.226 "allow_any_host": false, 00:15:01.226 "serial_number": "SPDK00000000000001", 00:15:01.226 "model_number": "SPDK bdev Controller", 00:15:01.226 "max_namespaces": 10, 00:15:01.226 "min_cntlid": 1, 00:15:01.226 "max_cntlid": 65519, 00:15:01.226 "ana_reporting": false 00:15:01.226 } 00:15:01.226 }, 00:15:01.226 { 00:15:01.226 "method": "nvmf_subsystem_add_host", 00:15:01.226 "params": { 00:15:01.226 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:01.226 "host": "nqn.2016-06.io.spdk:host1", 00:15:01.226 "psk": "key0" 00:15:01.226 } 00:15:01.226 }, 00:15:01.226 { 00:15:01.226 "method": "nvmf_subsystem_add_ns", 00:15:01.226 "params": { 00:15:01.226 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:01.226 "namespace": { 00:15:01.226 "nsid": 1, 00:15:01.226 "bdev_name": "malloc0", 
00:15:01.226 "nguid": "389B050EE3BE427C9F5B812F50F7C2A4", 00:15:01.226 "uuid": "389b050e-e3be-427c-9f5b-812f50f7c2a4", 00:15:01.226 "no_auto_visible": false 00:15:01.226 } 00:15:01.226 } 00:15:01.226 }, 00:15:01.226 { 00:15:01.226 "method": "nvmf_subsystem_add_listener", 00:15:01.226 "params": { 00:15:01.226 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:01.226 "listen_address": { 00:15:01.226 "trtype": "TCP", 00:15:01.226 "adrfam": "IPv4", 00:15:01.226 "traddr": "10.0.0.3", 00:15:01.226 "trsvcid": "4420" 00:15:01.226 }, 00:15:01.226 "secure_channel": true 00:15:01.226 } 00:15:01.226 } 00:15:01.226 ] 00:15:01.226 } 00:15:01.226 ] 00:15:01.227 }' 00:15:01.227 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72202 00:15:01.227 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:01.227 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72202 00:15:01.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.227 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72202 ']' 00:15:01.227 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.227 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:01.227 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.227 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:01.227 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:01.227 [2024-11-29 13:01:32.649826] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:15:01.227 [2024-11-29 13:01:32.650252] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.485 [2024-11-29 13:01:32.803211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.485 [2024-11-29 13:01:32.871716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:01.485 [2024-11-29 13:01:32.871777] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:01.485 [2024-11-29 13:01:32.871805] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:01.485 [2024-11-29 13:01:32.871813] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:01.485 [2024-11-29 13:01:32.871820] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:01.485 [2024-11-29 13:01:32.872317] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.744 [2024-11-29 13:01:33.046762] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:01.744 [2024-11-29 13:01:33.134684] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:01.744 [2024-11-29 13:01:33.166608] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:01.744 [2024-11-29 13:01:33.166904] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:02.312 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:02.312 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:02.312 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:02.312 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:02.312 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:02.312 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.312 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=72234 00:15:02.312 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 72234 /var/tmp/bdevperf.sock 00:15:02.312 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72234 ']' 00:15:02.312 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:02.312 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:02.312 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
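The nvmf_tgt that just came up listening on 10.0.0.3 port 4420 (pid 72202) was not configured by hand: the JSON captured earlier with save_config is echoed into the new process through -c /dev/fd/62. A minimal sketch of that save-and-replay pattern, reusing the binary paths, network namespace, and 0x2 core mask from the trace above (the /tmp/tgt.json filename is only illustrative):

SPDK=/home/vagrant/spdk_repo/spdk
# Capture the running target's configuration as JSON over its default RPC socket.
$SPDK/scripts/rpc.py save_config > /tmp/tgt.json
# Replay it into a fresh target; bash process substitution is what yields the
# /dev/fd/NN config path seen in the trace.
ip netns exec nvmf_tgt_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 \
    -c <(cat /tmp/tgt.json) &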
00:15:02.312 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:02.312 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.312 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:02.312 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:15:02.312 "subsystems": [ 00:15:02.312 { 00:15:02.312 "subsystem": "keyring", 00:15:02.312 "config": [ 00:15:02.312 { 00:15:02.312 "method": "keyring_file_add_key", 00:15:02.312 "params": { 00:15:02.312 "name": "key0", 00:15:02.312 "path": "/tmp/tmp.nIbxcjPvpI" 00:15:02.312 } 00:15:02.312 } 00:15:02.312 ] 00:15:02.312 }, 00:15:02.312 { 00:15:02.312 "subsystem": "iobuf", 00:15:02.312 "config": [ 00:15:02.312 { 00:15:02.312 "method": "iobuf_set_options", 00:15:02.312 "params": { 00:15:02.312 "small_pool_count": 8192, 00:15:02.312 "large_pool_count": 1024, 00:15:02.312 "small_bufsize": 8192, 00:15:02.312 "large_bufsize": 135168, 00:15:02.312 "enable_numa": false 00:15:02.312 } 00:15:02.312 } 00:15:02.312 ] 00:15:02.312 }, 00:15:02.312 { 00:15:02.312 "subsystem": "sock", 00:15:02.312 "config": [ 00:15:02.312 { 00:15:02.312 "method": "sock_set_default_impl", 00:15:02.312 "params": { 00:15:02.312 "impl_name": "uring" 00:15:02.312 } 00:15:02.312 }, 00:15:02.312 { 00:15:02.312 "method": "sock_impl_set_options", 00:15:02.312 "params": { 00:15:02.312 "impl_name": "ssl", 00:15:02.312 "recv_buf_size": 4096, 00:15:02.312 "send_buf_size": 4096, 00:15:02.312 "enable_recv_pipe": true, 00:15:02.312 "enable_quickack": false, 00:15:02.312 "enable_placement_id": 0, 00:15:02.312 "enable_zerocopy_send_server": true, 00:15:02.312 "enable_zerocopy_send_client": false, 00:15:02.312 "zerocopy_threshold": 0, 00:15:02.312 "tls_version": 0, 00:15:02.312 "enable_ktls": false 00:15:02.312 } 00:15:02.312 }, 00:15:02.312 { 00:15:02.312 "method": "sock_impl_set_options", 00:15:02.312 "params": { 00:15:02.312 "impl_name": "posix", 00:15:02.312 "recv_buf_size": 2097152, 00:15:02.312 "send_buf_size": 2097152, 00:15:02.312 "enable_recv_pipe": true, 00:15:02.312 "enable_quickack": false, 00:15:02.312 "enable_placement_id": 0, 00:15:02.312 "enable_zerocopy_send_server": true, 00:15:02.312 "enable_zerocopy_send_client": false, 00:15:02.312 "zerocopy_threshold": 0, 00:15:02.312 "tls_version": 0, 00:15:02.312 "enable_ktls": false 00:15:02.312 } 00:15:02.312 }, 00:15:02.312 { 00:15:02.312 "method": "sock_impl_set_options", 00:15:02.312 "params": { 00:15:02.312 "impl_name": "uring", 00:15:02.312 "recv_buf_size": 2097152, 00:15:02.312 "send_buf_size": 2097152, 00:15:02.312 "enable_recv_pipe": true, 00:15:02.312 "enable_quickack": false, 00:15:02.312 "enable_placement_id": 0, 00:15:02.312 "enable_zerocopy_send_server": false, 00:15:02.312 "enable_zerocopy_send_client": false, 00:15:02.312 "zerocopy_threshold": 0, 00:15:02.312 "tls_version": 0, 00:15:02.312 "enable_ktls": false 00:15:02.312 } 00:15:02.312 } 00:15:02.312 ] 00:15:02.312 }, 00:15:02.312 { 00:15:02.312 "subsystem": "vmd", 00:15:02.312 "config": [] 00:15:02.312 }, 00:15:02.312 { 00:15:02.312 "subsystem": "accel", 00:15:02.312 "config": [ 00:15:02.312 { 00:15:02.312 "method": "accel_set_options", 00:15:02.312 "params": { 00:15:02.312 "small_cache_size": 128, 00:15:02.312 "large_cache_size": 16, 00:15:02.312 "task_count": 2048, 00:15:02.312 "sequence_count": 
2048, 00:15:02.312 "buf_count": 2048 00:15:02.312 } 00:15:02.312 } 00:15:02.312 ] 00:15:02.312 }, 00:15:02.312 { 00:15:02.313 "subsystem": "bdev", 00:15:02.313 "config": [ 00:15:02.313 { 00:15:02.313 "method": "bdev_set_options", 00:15:02.313 "params": { 00:15:02.313 "bdev_io_pool_size": 65535, 00:15:02.313 "bdev_io_cache_size": 256, 00:15:02.313 "bdev_auto_examine": true, 00:15:02.313 "iobuf_small_cache_size": 128, 00:15:02.313 "iobuf_large_cache_size": 16 00:15:02.313 } 00:15:02.313 }, 00:15:02.313 { 00:15:02.313 "method": "bdev_raid_set_options", 00:15:02.313 "params": { 00:15:02.313 "process_window_size_kb": 1024, 00:15:02.313 "process_max_bandwidth_mb_sec": 0 00:15:02.313 } 00:15:02.313 }, 00:15:02.313 { 00:15:02.313 "method": "bdev_iscsi_set_options", 00:15:02.313 "params": { 00:15:02.313 "timeout_sec": 30 00:15:02.313 } 00:15:02.313 }, 00:15:02.313 { 00:15:02.313 "method": "bdev_nvme_set_options", 00:15:02.313 "params": { 00:15:02.313 "action_on_timeout": "none", 00:15:02.313 "timeout_us": 0, 00:15:02.313 "timeout_admin_us": 0, 00:15:02.313 "keep_alive_timeout_ms": 10000, 00:15:02.313 "arbitration_burst": 0, 00:15:02.313 "low_priority_weight": 0, 00:15:02.313 "medium_priority_weight": 0, 00:15:02.313 "high_priority_weight": 0, 00:15:02.313 "nvme_adminq_poll_period_us": 10000, 00:15:02.313 "nvme_ioq_poll_period_us": 0, 00:15:02.313 "io_queue_requests": 512, 00:15:02.313 "delay_cmd_submit": true, 00:15:02.313 "transport_retry_count": 4, 00:15:02.313 "bdev_retry_count": 3, 00:15:02.313 "transport_ack_timeout": 0, 00:15:02.313 "ctrlr_loss_timeout_sec": 0, 00:15:02.313 "reconnect_delay_sec": 0, 00:15:02.313 "fast_io_fail_timeout_sec": 0, 00:15:02.313 "disable_auto_failback": false, 00:15:02.313 "generate_uuids": false, 00:15:02.313 "transport_tos": 0, 00:15:02.313 "nvme_error_stat": false, 00:15:02.313 "rdma_srq_size": 0, 00:15:02.313 "io_path_stat": false, 00:15:02.313 "allow_accel_sequence": false, 00:15:02.313 "rdma_max_cq_size": 0, 00:15:02.313 "rdma_cm_event_timeout_ms": 0, 00:15:02.313 "dhchap_digests": [ 00:15:02.313 "sha256", 00:15:02.313 "sha384", 00:15:02.313 "sha512" 00:15:02.313 ], 00:15:02.313 "dhchap_dhgroups": [ 00:15:02.313 "null", 00:15:02.313 "ffdhe2048", 00:15:02.313 "ffdhe3072", 00:15:02.313 "ffdhe4096", 00:15:02.313 "ffdhe6144", 00:15:02.313 "ffdhe8192" 00:15:02.313 ] 00:15:02.313 } 00:15:02.313 }, 00:15:02.313 { 00:15:02.313 "method": "bdev_nvme_attach_controller", 00:15:02.313 "params": { 00:15:02.313 "name": "TLSTEST", 00:15:02.313 "trtype": "TCP", 00:15:02.313 "adrfam": "IPv4", 00:15:02.313 "traddr": "10.0.0.3", 00:15:02.313 "trsvcid": "4420", 00:15:02.313 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:02.313 "prchk_reftag": false, 00:15:02.313 "prchk_guard": false, 00:15:02.313 "ctrlr_loss_timeout_sec": 0, 00:15:02.313 "reconnect_delay_sec": 0, 00:15:02.313 "fast_io_fail_timeout_sec": 0, 00:15:02.313 "psk": "key0", 00:15:02.313 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:02.313 "hdgst": false, 00:15:02.313 "ddgst": false, 00:15:02.313 "multipath": "multipath" 00:15:02.313 } 00:15:02.313 }, 00:15:02.313 { 00:15:02.313 "method": "bdev_nvme_set_hotplug", 00:15:02.313 "params": { 00:15:02.313 "period_us": 100000, 00:15:02.313 "enable": false 00:15:02.313 } 00:15:02.313 }, 00:15:02.313 { 00:15:02.313 "method": "bdev_wait_for_examine" 00:15:02.313 } 00:15:02.313 ] 00:15:02.313 }, 00:15:02.313 { 00:15:02.313 "subsystem": "nbd", 00:15:02.313 "config": [] 00:15:02.313 } 00:15:02.313 ] 00:15:02.313 }' 00:15:02.572 [2024-11-29 13:01:33.866133] Starting SPDK v25.01-pre git 
sha1 89b293437 / DPDK 24.03.0 initialization... 00:15:02.572 [2024-11-29 13:01:33.866252] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72234 ] 00:15:02.572 [2024-11-29 13:01:34.020499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.831 [2024-11-29 13:01:34.113597] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.831 [2024-11-29 13:01:34.276557] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:03.089 [2024-11-29 13:01:34.346399] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:03.656 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:03.656 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:03.656 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:03.656 Running I/O for 10 seconds... 00:15:05.617 3456.00 IOPS, 13.50 MiB/s [2024-11-29T13:01:38.066Z] 3452.50 IOPS, 13.49 MiB/s [2024-11-29T13:01:39.442Z] 3456.00 IOPS, 13.50 MiB/s [2024-11-29T13:01:40.378Z] 3456.00 IOPS, 13.50 MiB/s [2024-11-29T13:01:41.314Z] 3478.80 IOPS, 13.59 MiB/s [2024-11-29T13:01:42.251Z] 3477.33 IOPS, 13.58 MiB/s [2024-11-29T13:01:43.187Z] 3486.00 IOPS, 13.62 MiB/s [2024-11-29T13:01:44.119Z] 3498.38 IOPS, 13.67 MiB/s [2024-11-29T13:01:45.063Z] 3547.89 IOPS, 13.86 MiB/s [2024-11-29T13:01:45.063Z] 3584.70 IOPS, 14.00 MiB/s 00:15:13.548 Latency(us) 00:15:13.548 [2024-11-29T13:01:45.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.548 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:13.548 Verification LBA range: start 0x0 length 0x2000 00:15:13.548 TLSTESTn1 : 10.03 3587.77 14.01 0.00 0.00 35599.78 9175.04 25261.15 00:15:13.548 [2024-11-29T13:01:45.063Z] =================================================================================================================== 00:15:13.548 [2024-11-29T13:01:45.063Z] Total : 3587.77 14.01 0.00 0.00 35599.78 9175.04 25261.15 00:15:13.548 { 00:15:13.548 "results": [ 00:15:13.548 { 00:15:13.548 "job": "TLSTESTn1", 00:15:13.548 "core_mask": "0x4", 00:15:13.548 "workload": "verify", 00:15:13.548 "status": "finished", 00:15:13.548 "verify_range": { 00:15:13.548 "start": 0, 00:15:13.548 "length": 8192 00:15:13.548 }, 00:15:13.548 "queue_depth": 128, 00:15:13.548 "io_size": 4096, 00:15:13.548 "runtime": 10.026563, 00:15:13.548 "iops": 3587.7698070615024, 00:15:13.548 "mibps": 14.014725808833994, 00:15:13.548 "io_failed": 0, 00:15:13.548 "io_timeout": 0, 00:15:13.548 "avg_latency_us": 35599.780478591274, 00:15:13.548 "min_latency_us": 9175.04, 00:15:13.548 "max_latency_us": 25261.14909090909 00:15:13.548 } 00:15:13.548 ], 00:15:13.548 "core_count": 1 00:15:13.548 } 00:15:13.805 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:13.805 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 72234 00:15:13.805 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72234 ']' 00:15:13.805 13:01:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72234 00:15:13.805 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:13.805 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:13.805 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72234 00:15:13.805 killing process with pid 72234 00:15:13.805 Received shutdown signal, test time was about 10.000000 seconds 00:15:13.805 00:15:13.805 Latency(us) 00:15:13.805 [2024-11-29T13:01:45.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.805 [2024-11-29T13:01:45.320Z] =================================================================================================================== 00:15:13.805 [2024-11-29T13:01:45.320Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:13.805 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:13.805 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:13.805 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72234' 00:15:13.805 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72234 00:15:13.805 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72234 00:15:14.063 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 72202 00:15:14.063 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72202 ']' 00:15:14.063 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72202 00:15:14.063 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:14.063 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:14.063 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72202 00:15:14.063 killing process with pid 72202 00:15:14.063 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:14.063 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:14.063 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72202' 00:15:14.064 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72202 00:15:14.064 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72202 00:15:14.322 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:15:14.322 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:14.322 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:14.322 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:14.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
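The 10-second TLSTESTn1 verify run that just completed (and the shorter runs later in this log) all drive bdevperf in its RPC mode: start it idle with -z, give it a bdev to test, then trigger the workload from bdevperf.py. A sketch of that flow with the parameters from the trace; how the TLS bdev is supplied (a JSON config via -c, or the attach RPCs sketched further down) varies per run:

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
SOCK=/var/tmp/bdevperf.sock
# -z keeps bdevperf idle until perform_tests is called over its RPC socket.
$BDEVPERF -m 0x4 -z -r $SOCK -q 128 -o 4096 -w verify -t 10 &
# ...configure the NVMe-oF TLS bdev here (JSON config or attach RPCs)...
# -t 20 is the RPC timeout for bdevperf.py, not the length of the I/O run.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s $SOCK perform_tests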
00:15:14.322 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72374 00:15:14.322 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:14.322 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72374 00:15:14.322 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72374 ']' 00:15:14.322 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.322 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:14.322 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.322 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:14.322 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:14.322 [2024-11-29 13:01:45.693877] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:15:14.322 [2024-11-29 13:01:45.694352] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:14.581 [2024-11-29 13:01:45.844465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.581 [2024-11-29 13:01:45.913834] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:14.581 [2024-11-29 13:01:45.913928] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:14.581 [2024-11-29 13:01:45.913955] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:14.581 [2024-11-29 13:01:45.913965] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:14.581 [2024-11-29 13:01:45.913975] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:14.581 [2024-11-29 13:01:45.914502] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.581 [2024-11-29 13:01:45.972099] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:15.515 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:15.515 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:15.515 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:15.515 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:15.515 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:15.515 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:15.515 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.nIbxcjPvpI 00:15:15.515 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.nIbxcjPvpI 00:15:15.515 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:15.515 [2024-11-29 13:01:46.979282] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:15.515 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:15.774 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:16.341 [2024-11-29 13:01:47.551458] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:16.341 [2024-11-29 13:01:47.551806] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:16.341 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:16.341 malloc0 00:15:16.341 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:16.599 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.nIbxcjPvpI 00:15:16.858 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:17.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
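setup_nvmf_tgt, whose RPCs have just scrolled past, builds the TLS-protected target in seven steps. Collected in one place as a sketch, with the key path, addresses, and NQNs exactly as they appear in the trace ($RPC is only shorthand for the rpc.py path used throughout):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
KEY=/tmp/tmp.nIbxcjPvpI                      # interchange-format TLS PSK created earlier in the script
$RPC nvmf_create_transport -t tcp -o         # TCP transport; -o disables the C2H success optimization
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k = secure (TLS) channel
$RPC bdev_malloc_create 32 4096 -b malloc0   # 32 MiB RAM-backed namespace, 4 KiB blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC keyring_file_add_key key0 "$KEY"
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0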
00:15:17.116 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72430 00:15:17.116 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:17.116 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:17.116 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72430 /var/tmp/bdevperf.sock 00:15:17.116 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72430 ']' 00:15:17.116 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:17.116 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:17.116 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:17.116 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:17.116 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:17.116 [2024-11-29 13:01:48.613756] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:15:17.116 [2024-11-29 13:01:48.614246] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72430 ] 00:15:17.375 [2024-11-29 13:01:48.763378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.375 [2024-11-29 13:01:48.827283] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.375 [2024-11-29 13:01:48.882154] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:17.634 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:17.634 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:17.634 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nIbxcjPvpI 00:15:17.893 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:18.150 [2024-11-29 13:01:49.505035] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:18.150 nvme0n1 00:15:18.150 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:18.408 Running I/O for 1 seconds... 
00:15:19.344 4093.00 IOPS, 15.99 MiB/s 00:15:19.344 Latency(us) 00:15:19.344 [2024-11-29T13:01:50.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.344 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:19.344 Verification LBA range: start 0x0 length 0x2000 00:15:19.344 nvme0n1 : 1.02 4155.42 16.23 0.00 0.00 30530.65 5481.19 23950.43 00:15:19.344 [2024-11-29T13:01:50.859Z] =================================================================================================================== 00:15:19.344 [2024-11-29T13:01:50.859Z] Total : 4155.42 16.23 0.00 0.00 30530.65 5481.19 23950.43 00:15:19.344 { 00:15:19.344 "results": [ 00:15:19.344 { 00:15:19.344 "job": "nvme0n1", 00:15:19.344 "core_mask": "0x2", 00:15:19.344 "workload": "verify", 00:15:19.344 "status": "finished", 00:15:19.344 "verify_range": { 00:15:19.344 "start": 0, 00:15:19.344 "length": 8192 00:15:19.344 }, 00:15:19.344 "queue_depth": 128, 00:15:19.344 "io_size": 4096, 00:15:19.344 "runtime": 1.015783, 00:15:19.344 "iops": 4155.4150837334355, 00:15:19.344 "mibps": 16.232090170833732, 00:15:19.344 "io_failed": 0, 00:15:19.344 "io_timeout": 0, 00:15:19.344 "avg_latency_us": 30530.652958583705, 00:15:19.344 "min_latency_us": 5481.192727272727, 00:15:19.344 "max_latency_us": 23950.429090909092 00:15:19.344 } 00:15:19.344 ], 00:15:19.344 "core_count": 1 00:15:19.344 } 00:15:19.344 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72430 00:15:19.344 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72430 ']' 00:15:19.344 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72430 00:15:19.344 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:19.344 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:19.344 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72430 00:15:19.344 killing process with pid 72430 00:15:19.344 Received shutdown signal, test time was about 1.000000 seconds 00:15:19.344 00:15:19.344 Latency(us) 00:15:19.344 [2024-11-29T13:01:50.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.344 [2024-11-29T13:01:50.859Z] =================================================================================================================== 00:15:19.344 [2024-11-29T13:01:50.859Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:19.344 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:19.344 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:19.344 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72430' 00:15:19.344 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72430 00:15:19.344 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72430 00:15:19.612 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72374 00:15:19.612 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72374 ']' 00:15:19.612 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72374 00:15:19.612 13:01:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:19.612 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:19.612 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72374 00:15:19.612 killing process with pid 72374 00:15:19.612 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:19.612 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:19.612 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72374' 00:15:19.612 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72374 00:15:19.612 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72374 00:15:19.894 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:15:19.894 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:19.894 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:19.894 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:19.894 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72474 00:15:19.894 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:19.894 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72474 00:15:19.894 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72474 ']' 00:15:19.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.894 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.894 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:19.894 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.894 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:19.894 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:19.894 [2024-11-29 13:01:51.328020] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:15:19.894 [2024-11-29 13:01:51.328113] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.152 [2024-11-29 13:01:51.468266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.152 [2024-11-29 13:01:51.523047] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:20.152 [2024-11-29 13:01:51.523132] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:20.152 [2024-11-29 13:01:51.523161] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:20.152 [2024-11-29 13:01:51.523169] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:20.152 [2024-11-29 13:01:51.523177] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:20.152 [2024-11-29 13:01:51.523630] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.152 [2024-11-29 13:01:51.576470] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:20.152 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:20.152 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:20.152 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:20.152 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:20.152 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:20.411 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:20.411 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:15:20.411 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.411 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:20.411 [2024-11-29 13:01:51.693970] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:20.411 malloc0 00:15:20.411 [2024-11-29 13:01:51.725991] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:20.411 [2024-11-29 13:01:51.726394] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:20.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:20.411 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.411 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72499 00:15:20.411 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:20.411 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72499 /var/tmp/bdevperf.sock 00:15:20.411 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72499 ']' 00:15:20.411 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:20.411 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:20.411 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:20.411 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:20.411 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:20.411 [2024-11-29 13:01:51.813578] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:15:20.411 [2024-11-29 13:01:51.813986] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72499 ] 00:15:20.670 [2024-11-29 13:01:51.960998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.670 [2024-11-29 13:01:52.029083] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.670 [2024-11-29 13:01:52.100377] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:20.670 13:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:20.670 13:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:20.670 13:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nIbxcjPvpI 00:15:21.238 13:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:21.238 [2024-11-29 13:01:52.704130] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:21.497 nvme0n1 00:15:21.497 13:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:21.497 Running I/O for 1 seconds... 
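Editor's note: the attach sequence traced just above condenses to three RPC calls against the bdevperf app's socket. A minimal sketch follows, reusing the exact key name, address, and NQNs from this run (treat them as placeholders for any other setup):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    # Register the pre-shared key file under the name the initiator will reference.
    $rpc keyring_file_add_key key0 /tmp/tmp.nIbxcjPvpI
    # Attach the NVMe-oF controller over TCP, selecting the PSK by keyring name.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # Kick off the queued bdevperf workload once the bdev exists.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests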
00:15:22.432 3875.00 IOPS, 15.14 MiB/s 00:15:22.432 Latency(us) 00:15:22.432 [2024-11-29T13:01:53.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:22.432 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:22.432 Verification LBA range: start 0x0 length 0x2000 00:15:22.432 nvme0n1 : 1.03 3899.29 15.23 0.00 0.00 32309.50 4557.73 24665.37 00:15:22.432 [2024-11-29T13:01:53.947Z] =================================================================================================================== 00:15:22.432 [2024-11-29T13:01:53.947Z] Total : 3899.29 15.23 0.00 0.00 32309.50 4557.73 24665.37 00:15:22.432 { 00:15:22.432 "results": [ 00:15:22.432 { 00:15:22.432 "job": "nvme0n1", 00:15:22.432 "core_mask": "0x2", 00:15:22.432 "workload": "verify", 00:15:22.432 "status": "finished", 00:15:22.432 "verify_range": { 00:15:22.432 "start": 0, 00:15:22.432 "length": 8192 00:15:22.432 }, 00:15:22.432 "queue_depth": 128, 00:15:22.432 "io_size": 4096, 00:15:22.432 "runtime": 1.026596, 00:15:22.432 "iops": 3899.2943670148725, 00:15:22.432 "mibps": 15.231618621151846, 00:15:22.432 "io_failed": 0, 00:15:22.432 "io_timeout": 0, 00:15:22.432 "avg_latency_us": 32309.502505847882, 00:15:22.432 "min_latency_us": 4557.730909090909, 00:15:22.432 "max_latency_us": 24665.36727272727 00:15:22.432 } 00:15:22.432 ], 00:15:22.432 "core_count": 1 00:15:22.432 } 00:15:22.690 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:15:22.690 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.690 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:22.690 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.690 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:15:22.690 "subsystems": [ 00:15:22.690 { 00:15:22.690 "subsystem": "keyring", 00:15:22.690 "config": [ 00:15:22.690 { 00:15:22.690 "method": "keyring_file_add_key", 00:15:22.690 "params": { 00:15:22.690 "name": "key0", 00:15:22.690 "path": "/tmp/tmp.nIbxcjPvpI" 00:15:22.690 } 00:15:22.690 } 00:15:22.690 ] 00:15:22.690 }, 00:15:22.691 { 00:15:22.691 "subsystem": "iobuf", 00:15:22.691 "config": [ 00:15:22.691 { 00:15:22.691 "method": "iobuf_set_options", 00:15:22.691 "params": { 00:15:22.691 "small_pool_count": 8192, 00:15:22.691 "large_pool_count": 1024, 00:15:22.691 "small_bufsize": 8192, 00:15:22.691 "large_bufsize": 135168, 00:15:22.691 "enable_numa": false 00:15:22.691 } 00:15:22.691 } 00:15:22.691 ] 00:15:22.691 }, 00:15:22.691 { 00:15:22.691 "subsystem": "sock", 00:15:22.691 "config": [ 00:15:22.691 { 00:15:22.691 "method": "sock_set_default_impl", 00:15:22.691 "params": { 00:15:22.691 "impl_name": "uring" 00:15:22.691 } 00:15:22.691 }, 00:15:22.691 { 00:15:22.691 "method": "sock_impl_set_options", 00:15:22.691 "params": { 00:15:22.691 "impl_name": "ssl", 00:15:22.691 "recv_buf_size": 4096, 00:15:22.691 "send_buf_size": 4096, 00:15:22.691 "enable_recv_pipe": true, 00:15:22.691 "enable_quickack": false, 00:15:22.691 "enable_placement_id": 0, 00:15:22.691 "enable_zerocopy_send_server": true, 00:15:22.691 "enable_zerocopy_send_client": false, 00:15:22.691 "zerocopy_threshold": 0, 00:15:22.691 "tls_version": 0, 00:15:22.691 "enable_ktls": false 00:15:22.691 } 00:15:22.691 }, 00:15:22.691 { 00:15:22.691 "method": "sock_impl_set_options", 00:15:22.691 "params": { 00:15:22.691 "impl_name": 
"posix", 00:15:22.691 "recv_buf_size": 2097152, 00:15:22.691 "send_buf_size": 2097152, 00:15:22.691 "enable_recv_pipe": true, 00:15:22.691 "enable_quickack": false, 00:15:22.691 "enable_placement_id": 0, 00:15:22.691 "enable_zerocopy_send_server": true, 00:15:22.691 "enable_zerocopy_send_client": false, 00:15:22.691 "zerocopy_threshold": 0, 00:15:22.691 "tls_version": 0, 00:15:22.691 "enable_ktls": false 00:15:22.691 } 00:15:22.691 }, 00:15:22.691 { 00:15:22.691 "method": "sock_impl_set_options", 00:15:22.691 "params": { 00:15:22.691 "impl_name": "uring", 00:15:22.691 "recv_buf_size": 2097152, 00:15:22.691 "send_buf_size": 2097152, 00:15:22.691 "enable_recv_pipe": true, 00:15:22.691 "enable_quickack": false, 00:15:22.691 "enable_placement_id": 0, 00:15:22.691 "enable_zerocopy_send_server": false, 00:15:22.691 "enable_zerocopy_send_client": false, 00:15:22.691 "zerocopy_threshold": 0, 00:15:22.691 "tls_version": 0, 00:15:22.691 "enable_ktls": false 00:15:22.691 } 00:15:22.691 } 00:15:22.691 ] 00:15:22.691 }, 00:15:22.691 { 00:15:22.691 "subsystem": "vmd", 00:15:22.691 "config": [] 00:15:22.691 }, 00:15:22.691 { 00:15:22.691 "subsystem": "accel", 00:15:22.691 "config": [ 00:15:22.691 { 00:15:22.691 "method": "accel_set_options", 00:15:22.691 "params": { 00:15:22.691 "small_cache_size": 128, 00:15:22.691 "large_cache_size": 16, 00:15:22.691 "task_count": 2048, 00:15:22.691 "sequence_count": 2048, 00:15:22.691 "buf_count": 2048 00:15:22.691 } 00:15:22.691 } 00:15:22.691 ] 00:15:22.691 }, 00:15:22.691 { 00:15:22.691 "subsystem": "bdev", 00:15:22.691 "config": [ 00:15:22.691 { 00:15:22.691 "method": "bdev_set_options", 00:15:22.691 "params": { 00:15:22.691 "bdev_io_pool_size": 65535, 00:15:22.691 "bdev_io_cache_size": 256, 00:15:22.691 "bdev_auto_examine": true, 00:15:22.691 "iobuf_small_cache_size": 128, 00:15:22.691 "iobuf_large_cache_size": 16 00:15:22.691 } 00:15:22.691 }, 00:15:22.691 { 00:15:22.691 "method": "bdev_raid_set_options", 00:15:22.691 "params": { 00:15:22.691 "process_window_size_kb": 1024, 00:15:22.691 "process_max_bandwidth_mb_sec": 0 00:15:22.691 } 00:15:22.691 }, 00:15:22.691 { 00:15:22.691 "method": "bdev_iscsi_set_options", 00:15:22.691 "params": { 00:15:22.691 "timeout_sec": 30 00:15:22.691 } 00:15:22.691 }, 00:15:22.691 { 00:15:22.691 "method": "bdev_nvme_set_options", 00:15:22.691 "params": { 00:15:22.691 "action_on_timeout": "none", 00:15:22.691 "timeout_us": 0, 00:15:22.691 "timeout_admin_us": 0, 00:15:22.691 "keep_alive_timeout_ms": 10000, 00:15:22.691 "arbitration_burst": 0, 00:15:22.691 "low_priority_weight": 0, 00:15:22.691 "medium_priority_weight": 0, 00:15:22.691 "high_priority_weight": 0, 00:15:22.691 "nvme_adminq_poll_period_us": 10000, 00:15:22.691 "nvme_ioq_poll_period_us": 0, 00:15:22.691 "io_queue_requests": 0, 00:15:22.691 "delay_cmd_submit": true, 00:15:22.691 "transport_retry_count": 4, 00:15:22.691 "bdev_retry_count": 3, 00:15:22.691 "transport_ack_timeout": 0, 00:15:22.691 "ctrlr_loss_timeout_sec": 0, 00:15:22.691 "reconnect_delay_sec": 0, 00:15:22.691 "fast_io_fail_timeout_sec": 0, 00:15:22.691 "disable_auto_failback": false, 00:15:22.691 "generate_uuids": false, 00:15:22.691 "transport_tos": 0, 00:15:22.691 "nvme_error_stat": false, 00:15:22.691 "rdma_srq_size": 0, 00:15:22.691 "io_path_stat": false, 00:15:22.691 "allow_accel_sequence": false, 00:15:22.691 "rdma_max_cq_size": 0, 00:15:22.691 "rdma_cm_event_timeout_ms": 0, 00:15:22.691 "dhchap_digests": [ 00:15:22.691 "sha256", 00:15:22.691 "sha384", 00:15:22.691 "sha512" 00:15:22.691 ], 00:15:22.691 
"dhchap_dhgroups": [ 00:15:22.691 "null", 00:15:22.691 "ffdhe2048", 00:15:22.691 "ffdhe3072", 00:15:22.691 "ffdhe4096", 00:15:22.691 "ffdhe6144", 00:15:22.691 "ffdhe8192" 00:15:22.691 ] 00:15:22.691 } 00:15:22.691 }, 00:15:22.691 { 00:15:22.691 "method": "bdev_nvme_set_hotplug", 00:15:22.691 "params": { 00:15:22.691 "period_us": 100000, 00:15:22.691 "enable": false 00:15:22.691 } 00:15:22.691 }, 00:15:22.691 { 00:15:22.691 "method": "bdev_malloc_create", 00:15:22.691 "params": { 00:15:22.691 "name": "malloc0", 00:15:22.691 "num_blocks": 8192, 00:15:22.691 "block_size": 4096, 00:15:22.691 "physical_block_size": 4096, 00:15:22.691 "uuid": "23b8c7b3-496a-4ebe-a748-50e82edb08dd", 00:15:22.691 "optimal_io_boundary": 0, 00:15:22.691 "md_size": 0, 00:15:22.691 "dif_type": 0, 00:15:22.691 "dif_is_head_of_md": false, 00:15:22.691 "dif_pi_format": 0 00:15:22.691 } 00:15:22.691 }, 00:15:22.691 { 00:15:22.691 "method": "bdev_wait_for_examine" 00:15:22.691 } 00:15:22.691 ] 00:15:22.691 }, 00:15:22.692 { 00:15:22.692 "subsystem": "nbd", 00:15:22.692 "config": [] 00:15:22.692 }, 00:15:22.692 { 00:15:22.692 "subsystem": "scheduler", 00:15:22.692 "config": [ 00:15:22.692 { 00:15:22.692 "method": "framework_set_scheduler", 00:15:22.692 "params": { 00:15:22.692 "name": "static" 00:15:22.692 } 00:15:22.692 } 00:15:22.692 ] 00:15:22.692 }, 00:15:22.692 { 00:15:22.692 "subsystem": "nvmf", 00:15:22.692 "config": [ 00:15:22.692 { 00:15:22.692 "method": "nvmf_set_config", 00:15:22.692 "params": { 00:15:22.692 "discovery_filter": "match_any", 00:15:22.692 "admin_cmd_passthru": { 00:15:22.692 "identify_ctrlr": false 00:15:22.692 }, 00:15:22.692 "dhchap_digests": [ 00:15:22.692 "sha256", 00:15:22.692 "sha384", 00:15:22.692 "sha512" 00:15:22.692 ], 00:15:22.692 "dhchap_dhgroups": [ 00:15:22.692 "null", 00:15:22.692 "ffdhe2048", 00:15:22.692 "ffdhe3072", 00:15:22.692 "ffdhe4096", 00:15:22.692 "ffdhe6144", 00:15:22.692 "ffdhe8192" 00:15:22.692 ] 00:15:22.692 } 00:15:22.692 }, 00:15:22.692 { 00:15:22.692 "method": "nvmf_set_max_subsystems", 00:15:22.692 "params": { 00:15:22.692 "max_subsystems": 1024 00:15:22.692 } 00:15:22.692 }, 00:15:22.692 { 00:15:22.692 "method": "nvmf_set_crdt", 00:15:22.692 "params": { 00:15:22.692 "crdt1": 0, 00:15:22.692 "crdt2": 0, 00:15:22.692 "crdt3": 0 00:15:22.692 } 00:15:22.692 }, 00:15:22.692 { 00:15:22.692 "method": "nvmf_create_transport", 00:15:22.692 "params": { 00:15:22.692 "trtype": "TCP", 00:15:22.692 "max_queue_depth": 128, 00:15:22.692 "max_io_qpairs_per_ctrlr": 127, 00:15:22.692 "in_capsule_data_size": 4096, 00:15:22.692 "max_io_size": 131072, 00:15:22.692 "io_unit_size": 131072, 00:15:22.692 "max_aq_depth": 128, 00:15:22.692 "num_shared_buffers": 511, 00:15:22.692 "buf_cache_size": 4294967295, 00:15:22.692 "dif_insert_or_strip": false, 00:15:22.692 "zcopy": false, 00:15:22.692 "c2h_success": false, 00:15:22.692 "sock_priority": 0, 00:15:22.692 "abort_timeout_sec": 1, 00:15:22.692 "ack_timeout": 0, 00:15:22.692 "data_wr_pool_size": 0 00:15:22.692 } 00:15:22.692 }, 00:15:22.692 { 00:15:22.692 "method": "nvmf_create_subsystem", 00:15:22.692 "params": { 00:15:22.692 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:22.692 "allow_any_host": false, 00:15:22.692 "serial_number": "00000000000000000000", 00:15:22.692 "model_number": "SPDK bdev Controller", 00:15:22.692 "max_namespaces": 32, 00:15:22.692 "min_cntlid": 1, 00:15:22.692 "max_cntlid": 65519, 00:15:22.692 "ana_reporting": false 00:15:22.692 } 00:15:22.692 }, 00:15:22.692 { 00:15:22.692 "method": "nvmf_subsystem_add_host", 
00:15:22.692 "params": { 00:15:22.692 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:22.692 "host": "nqn.2016-06.io.spdk:host1", 00:15:22.692 "psk": "key0" 00:15:22.692 } 00:15:22.692 }, 00:15:22.692 { 00:15:22.692 "method": "nvmf_subsystem_add_ns", 00:15:22.692 "params": { 00:15:22.692 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:22.692 "namespace": { 00:15:22.692 "nsid": 1, 00:15:22.692 "bdev_name": "malloc0", 00:15:22.692 "nguid": "23B8C7B3496A4EBEA74850E82EDB08DD", 00:15:22.692 "uuid": "23b8c7b3-496a-4ebe-a748-50e82edb08dd", 00:15:22.692 "no_auto_visible": false 00:15:22.692 } 00:15:22.692 } 00:15:22.692 }, 00:15:22.692 { 00:15:22.692 "method": "nvmf_subsystem_add_listener", 00:15:22.692 "params": { 00:15:22.692 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:22.692 "listen_address": { 00:15:22.692 "trtype": "TCP", 00:15:22.692 "adrfam": "IPv4", 00:15:22.692 "traddr": "10.0.0.3", 00:15:22.692 "trsvcid": "4420" 00:15:22.692 }, 00:15:22.692 "secure_channel": false, 00:15:22.692 "sock_impl": "ssl" 00:15:22.692 } 00:15:22.692 } 00:15:22.692 ] 00:15:22.692 } 00:15:22.692 ] 00:15:22.692 }' 00:15:22.692 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:22.951 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:15:22.951 "subsystems": [ 00:15:22.951 { 00:15:22.951 "subsystem": "keyring", 00:15:22.951 "config": [ 00:15:22.951 { 00:15:22.951 "method": "keyring_file_add_key", 00:15:22.951 "params": { 00:15:22.951 "name": "key0", 00:15:22.951 "path": "/tmp/tmp.nIbxcjPvpI" 00:15:22.951 } 00:15:22.951 } 00:15:22.951 ] 00:15:22.951 }, 00:15:22.951 { 00:15:22.951 "subsystem": "iobuf", 00:15:22.951 "config": [ 00:15:22.951 { 00:15:22.951 "method": "iobuf_set_options", 00:15:22.951 "params": { 00:15:22.951 "small_pool_count": 8192, 00:15:22.951 "large_pool_count": 1024, 00:15:22.951 "small_bufsize": 8192, 00:15:22.951 "large_bufsize": 135168, 00:15:22.951 "enable_numa": false 00:15:22.951 } 00:15:22.951 } 00:15:22.951 ] 00:15:22.951 }, 00:15:22.951 { 00:15:22.951 "subsystem": "sock", 00:15:22.951 "config": [ 00:15:22.951 { 00:15:22.951 "method": "sock_set_default_impl", 00:15:22.951 "params": { 00:15:22.951 "impl_name": "uring" 00:15:22.951 } 00:15:22.951 }, 00:15:22.951 { 00:15:22.951 "method": "sock_impl_set_options", 00:15:22.951 "params": { 00:15:22.951 "impl_name": "ssl", 00:15:22.951 "recv_buf_size": 4096, 00:15:22.951 "send_buf_size": 4096, 00:15:22.951 "enable_recv_pipe": true, 00:15:22.951 "enable_quickack": false, 00:15:22.951 "enable_placement_id": 0, 00:15:22.951 "enable_zerocopy_send_server": true, 00:15:22.951 "enable_zerocopy_send_client": false, 00:15:22.951 "zerocopy_threshold": 0, 00:15:22.951 "tls_version": 0, 00:15:22.951 "enable_ktls": false 00:15:22.951 } 00:15:22.951 }, 00:15:22.951 { 00:15:22.951 "method": "sock_impl_set_options", 00:15:22.951 "params": { 00:15:22.951 "impl_name": "posix", 00:15:22.951 "recv_buf_size": 2097152, 00:15:22.951 "send_buf_size": 2097152, 00:15:22.951 "enable_recv_pipe": true, 00:15:22.951 "enable_quickack": false, 00:15:22.951 "enable_placement_id": 0, 00:15:22.951 "enable_zerocopy_send_server": true, 00:15:22.951 "enable_zerocopy_send_client": false, 00:15:22.951 "zerocopy_threshold": 0, 00:15:22.951 "tls_version": 0, 00:15:22.951 "enable_ktls": false 00:15:22.951 } 00:15:22.951 }, 00:15:22.951 { 00:15:22.951 "method": "sock_impl_set_options", 00:15:22.951 "params": { 00:15:22.951 "impl_name": "uring", 00:15:22.951 
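Editor's note: the tgtcfg JSON captured above is plain save_config output, so the TLS-relevant pieces can be pulled back out with jq (which this test already uses further down for bdev_nvme_get_controllers). A small sketch, assuming the dump is held in $tgtcfg:

    # Print the ssl listener and the host-to-PSK mapping from the saved nvmf config.
    echo "$tgtcfg" | jq '.subsystems[]
        | select(.subsystem == "nvmf").config[]
        | select(.method == "nvmf_subsystem_add_listener" or .method == "nvmf_subsystem_add_host")
        | .params'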
"recv_buf_size": 2097152, 00:15:22.951 "send_buf_size": 2097152, 00:15:22.951 "enable_recv_pipe": true, 00:15:22.951 "enable_quickack": false, 00:15:22.951 "enable_placement_id": 0, 00:15:22.951 "enable_zerocopy_send_server": false, 00:15:22.951 "enable_zerocopy_send_client": false, 00:15:22.951 "zerocopy_threshold": 0, 00:15:22.951 "tls_version": 0, 00:15:22.951 "enable_ktls": false 00:15:22.951 } 00:15:22.951 } 00:15:22.951 ] 00:15:22.951 }, 00:15:22.951 { 00:15:22.951 "subsystem": "vmd", 00:15:22.951 "config": [] 00:15:22.951 }, 00:15:22.951 { 00:15:22.951 "subsystem": "accel", 00:15:22.951 "config": [ 00:15:22.951 { 00:15:22.951 "method": "accel_set_options", 00:15:22.951 "params": { 00:15:22.951 "small_cache_size": 128, 00:15:22.951 "large_cache_size": 16, 00:15:22.951 "task_count": 2048, 00:15:22.951 "sequence_count": 2048, 00:15:22.951 "buf_count": 2048 00:15:22.951 } 00:15:22.951 } 00:15:22.951 ] 00:15:22.951 }, 00:15:22.951 { 00:15:22.951 "subsystem": "bdev", 00:15:22.951 "config": [ 00:15:22.951 { 00:15:22.951 "method": "bdev_set_options", 00:15:22.951 "params": { 00:15:22.951 "bdev_io_pool_size": 65535, 00:15:22.951 "bdev_io_cache_size": 256, 00:15:22.951 "bdev_auto_examine": true, 00:15:22.951 "iobuf_small_cache_size": 128, 00:15:22.951 "iobuf_large_cache_size": 16 00:15:22.951 } 00:15:22.951 }, 00:15:22.951 { 00:15:22.951 "method": "bdev_raid_set_options", 00:15:22.951 "params": { 00:15:22.951 "process_window_size_kb": 1024, 00:15:22.951 "process_max_bandwidth_mb_sec": 0 00:15:22.951 } 00:15:22.951 }, 00:15:22.951 { 00:15:22.951 "method": "bdev_iscsi_set_options", 00:15:22.951 "params": { 00:15:22.951 "timeout_sec": 30 00:15:22.951 } 00:15:22.951 }, 00:15:22.951 { 00:15:22.951 "method": "bdev_nvme_set_options", 00:15:22.951 "params": { 00:15:22.951 "action_on_timeout": "none", 00:15:22.951 "timeout_us": 0, 00:15:22.951 "timeout_admin_us": 0, 00:15:22.951 "keep_alive_timeout_ms": 10000, 00:15:22.951 "arbitration_burst": 0, 00:15:22.951 "low_priority_weight": 0, 00:15:22.951 "medium_priority_weight": 0, 00:15:22.951 "high_priority_weight": 0, 00:15:22.951 "nvme_adminq_poll_period_us": 10000, 00:15:22.951 "nvme_ioq_poll_period_us": 0, 00:15:22.951 "io_queue_requests": 512, 00:15:22.951 "delay_cmd_submit": true, 00:15:22.951 "transport_retry_count": 4, 00:15:22.951 "bdev_retry_count": 3, 00:15:22.951 "transport_ack_timeout": 0, 00:15:22.951 "ctrlr_loss_timeout_sec": 0, 00:15:22.951 "reconnect_delay_sec": 0, 00:15:22.951 "fast_io_fail_timeout_sec": 0, 00:15:22.951 "disable_auto_failback": false, 00:15:22.951 "generate_uuids": false, 00:15:22.951 "transport_tos": 0, 00:15:22.951 "nvme_error_stat": false, 00:15:22.951 "rdma_srq_size": 0, 00:15:22.951 "io_path_stat": false, 00:15:22.951 "allow_accel_sequence": false, 00:15:22.951 "rdma_max_cq_size": 0, 00:15:22.951 "rdma_cm_event_timeout_ms": 0, 00:15:22.951 "dhchap_digests": [ 00:15:22.951 "sha256", 00:15:22.951 "sha384", 00:15:22.951 "sha512" 00:15:22.951 ], 00:15:22.951 "dhchap_dhgroups": [ 00:15:22.951 "null", 00:15:22.951 "ffdhe2048", 00:15:22.951 "ffdhe3072", 00:15:22.951 "ffdhe4096", 00:15:22.951 "ffdhe6144", 00:15:22.951 "ffdhe8192" 00:15:22.951 ] 00:15:22.951 } 00:15:22.951 }, 00:15:22.951 { 00:15:22.951 "method": "bdev_nvme_attach_controller", 00:15:22.951 "params": { 00:15:22.951 "name": "nvme0", 00:15:22.952 "trtype": "TCP", 00:15:22.952 "adrfam": "IPv4", 00:15:22.952 "traddr": "10.0.0.3", 00:15:22.952 "trsvcid": "4420", 00:15:22.952 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:22.952 "prchk_reftag": false, 00:15:22.952 
"prchk_guard": false, 00:15:22.952 "ctrlr_loss_timeout_sec": 0, 00:15:22.952 "reconnect_delay_sec": 0, 00:15:22.952 "fast_io_fail_timeout_sec": 0, 00:15:22.952 "psk": "key0", 00:15:22.952 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:22.952 "hdgst": false, 00:15:22.952 "ddgst": false, 00:15:22.952 "multipath": "multipath" 00:15:22.952 } 00:15:22.952 }, 00:15:22.952 { 00:15:22.952 "method": "bdev_nvme_set_hotplug", 00:15:22.952 "params": { 00:15:22.952 "period_us": 100000, 00:15:22.952 "enable": false 00:15:22.952 } 00:15:22.952 }, 00:15:22.952 { 00:15:22.952 "method": "bdev_enable_histogram", 00:15:22.952 "params": { 00:15:22.952 "name": "nvme0n1", 00:15:22.952 "enable": true 00:15:22.952 } 00:15:22.952 }, 00:15:22.952 { 00:15:22.952 "method": "bdev_wait_for_examine" 00:15:22.952 } 00:15:22.952 ] 00:15:22.952 }, 00:15:22.952 { 00:15:22.952 "subsystem": "nbd", 00:15:22.952 "config": [] 00:15:22.952 } 00:15:22.952 ] 00:15:22.952 }' 00:15:22.952 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72499 00:15:22.952 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72499 ']' 00:15:22.952 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72499 00:15:22.952 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:22.952 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:22.952 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72499 00:15:23.210 killing process with pid 72499 00:15:23.210 Received shutdown signal, test time was about 1.000000 seconds 00:15:23.210 00:15:23.210 Latency(us) 00:15:23.210 [2024-11-29T13:01:54.725Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.210 [2024-11-29T13:01:54.725Z] =================================================================================================================== 00:15:23.210 [2024-11-29T13:01:54.725Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:23.210 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:23.210 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:23.210 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72499' 00:15:23.210 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72499 00:15:23.210 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72499 00:15:23.469 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72474 00:15:23.470 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72474 ']' 00:15:23.470 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72474 00:15:23.470 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:23.470 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:23.470 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72474 00:15:23.470 killing process with pid 72474 00:15:23.470 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:15:23.470 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:23.470 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72474' 00:15:23.470 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72474 00:15:23.470 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72474 00:15:23.728 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:15:23.728 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:23.728 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:15:23.728 "subsystems": [ 00:15:23.728 { 00:15:23.728 "subsystem": "keyring", 00:15:23.728 "config": [ 00:15:23.728 { 00:15:23.728 "method": "keyring_file_add_key", 00:15:23.728 "params": { 00:15:23.728 "name": "key0", 00:15:23.728 "path": "/tmp/tmp.nIbxcjPvpI" 00:15:23.728 } 00:15:23.728 } 00:15:23.728 ] 00:15:23.728 }, 00:15:23.728 { 00:15:23.728 "subsystem": "iobuf", 00:15:23.728 "config": [ 00:15:23.728 { 00:15:23.728 "method": "iobuf_set_options", 00:15:23.728 "params": { 00:15:23.728 "small_pool_count": 8192, 00:15:23.728 "large_pool_count": 1024, 00:15:23.728 "small_bufsize": 8192, 00:15:23.728 "large_bufsize": 135168, 00:15:23.728 "enable_numa": false 00:15:23.728 } 00:15:23.728 } 00:15:23.728 ] 00:15:23.728 }, 00:15:23.728 { 00:15:23.728 "subsystem": "sock", 00:15:23.728 "config": [ 00:15:23.728 { 00:15:23.728 "method": "sock_set_default_impl", 00:15:23.728 "params": { 00:15:23.728 "impl_name": "uring" 00:15:23.728 } 00:15:23.728 }, 00:15:23.728 { 00:15:23.728 "method": "sock_impl_set_options", 00:15:23.728 "params": { 00:15:23.728 "impl_name": "ssl", 00:15:23.728 "recv_buf_size": 4096, 00:15:23.728 "send_buf_size": 4096, 00:15:23.728 "enable_recv_pipe": true, 00:15:23.728 "enable_quickack": false, 00:15:23.728 "enable_placement_id": 0, 00:15:23.728 "enable_zerocopy_send_server": true, 00:15:23.728 "enable_zerocopy_send_client": false, 00:15:23.728 "zerocopy_threshold": 0, 00:15:23.728 "tls_version": 0, 00:15:23.728 "enable_ktls": false 00:15:23.728 } 00:15:23.728 }, 00:15:23.728 { 00:15:23.728 "method": "sock_impl_set_options", 00:15:23.728 "params": { 00:15:23.728 "impl_name": "posix", 00:15:23.728 "recv_buf_size": 2097152, 00:15:23.728 "send_buf_size": 2097152, 00:15:23.728 "enable_recv_pipe": true, 00:15:23.728 "enable_quickack": false, 00:15:23.728 "enable_placement_id": 0, 00:15:23.728 "enable_zerocopy_send_server": true, 00:15:23.728 "enable_zerocopy_send_client": false, 00:15:23.728 "zerocopy_threshold": 0, 00:15:23.728 "tls_version": 0, 00:15:23.728 "enable_ktls": false 00:15:23.728 } 00:15:23.728 }, 00:15:23.728 { 00:15:23.728 "method": "sock_impl_set_options", 00:15:23.728 "params": { 00:15:23.728 "impl_name": "uring", 00:15:23.728 "recv_buf_size": 2097152, 00:15:23.728 "send_buf_size": 2097152, 00:15:23.728 "enable_recv_pipe": true, 00:15:23.728 "enable_quickack": false, 00:15:23.728 "enable_placement_id": 0, 00:15:23.728 "enable_zerocopy_send_server": false, 00:15:23.728 "enable_zerocopy_send_client": false, 00:15:23.728 "zerocopy_threshold": 0, 00:15:23.728 "tls_version": 0, 00:15:23.728 "enable_ktls": false 00:15:23.728 } 00:15:23.728 } 00:15:23.728 ] 00:15:23.728 }, 00:15:23.728 { 00:15:23.728 "subsystem": "vmd", 00:15:23.728 "config": [] 00:15:23.728 }, 00:15:23.728 { 00:15:23.728 
"subsystem": "accel", 00:15:23.728 "config": [ 00:15:23.728 { 00:15:23.728 "method": "accel_set_options", 00:15:23.728 "params": { 00:15:23.728 "small_cache_size": 128, 00:15:23.728 "large_cache_size": 16, 00:15:23.728 "task_count": 2048, 00:15:23.728 "sequence_count": 2048, 00:15:23.728 "buf_count": 2048 00:15:23.728 } 00:15:23.728 } 00:15:23.728 ] 00:15:23.728 }, 00:15:23.728 { 00:15:23.728 "subsystem": "bdev", 00:15:23.728 "config": [ 00:15:23.728 { 00:15:23.728 "method": "bdev_set_options", 00:15:23.728 "params": { 00:15:23.728 "bdev_io_pool_size": 65535, 00:15:23.728 "bdev_io_cache_size": 256, 00:15:23.728 "bdev_auto_examine": true, 00:15:23.728 "iobuf_small_cache_size": 128, 00:15:23.728 "iobuf_large_cache_size": 16 00:15:23.728 } 00:15:23.728 }, 00:15:23.728 { 00:15:23.728 "method": "bdev_raid_set_options", 00:15:23.728 "params": { 00:15:23.728 "process_window_size_kb": 1024, 00:15:23.728 "process_max_bandwidth_mb_sec": 0 00:15:23.728 } 00:15:23.728 }, 00:15:23.728 { 00:15:23.728 "method": "bdev_iscsi_set_options", 00:15:23.728 "params": { 00:15:23.728 "timeout_sec": 30 00:15:23.728 } 00:15:23.728 }, 00:15:23.728 { 00:15:23.728 "method": "bdev_nvme_set_options", 00:15:23.728 "params": { 00:15:23.728 "action_on_timeout": "none", 00:15:23.728 "timeout_us": 0, 00:15:23.728 "timeout_admin_us": 0, 00:15:23.728 "keep_alive_timeout_ms": 10000, 00:15:23.728 "arbitration_burst": 0, 00:15:23.728 "low_priority_weight": 0, 00:15:23.728 "medium_priority_weight": 0, 00:15:23.728 "high_priority_weight": 0, 00:15:23.728 "nvme_adminq_poll_period_us": 10000, 00:15:23.728 "nvme_ioq_poll_period_us": 0, 00:15:23.728 "io_queue_requests": 0, 00:15:23.728 "delay_cmd_submit": true, 00:15:23.728 "transport_retry_count": 4, 00:15:23.728 "bdev_retry_count": 3, 00:15:23.728 "transport_ack_timeout": 0, 00:15:23.728 "ctrlr_loss_timeout_sec": 0, 00:15:23.728 "reconnect_delay_sec": 0, 00:15:23.728 "fast_io_fail_timeout_sec": 0, 00:15:23.728 "disable_auto_failback": false, 00:15:23.728 "generate_uuids": false, 00:15:23.728 "transport_tos": 0, 00:15:23.728 "nvme_error_stat": false, 00:15:23.728 "rdma_srq_size": 0, 00:15:23.728 "io_path_stat": false, 00:15:23.728 "allow_accel_sequence": false, 00:15:23.728 "rdma_max_cq_size": 0, 00:15:23.728 "rdma_cm_event_timeout_ms": 0, 00:15:23.728 "dhchap_digests": [ 00:15:23.728 "sha256", 00:15:23.728 "sha384", 00:15:23.728 "sha512" 00:15:23.728 ], 00:15:23.728 "dhchap_dhgroups": [ 00:15:23.728 "null", 00:15:23.728 "ffdhe2048", 00:15:23.728 "ffdhe3072", 00:15:23.728 "ffdhe4096", 00:15:23.728 "ffdhe6144", 00:15:23.728 "ffdhe8192" 00:15:23.728 ] 00:15:23.728 } 00:15:23.728 }, 00:15:23.728 { 00:15:23.728 "method": "bdev_nvme_set_hotplug", 00:15:23.728 "params": { 00:15:23.728 "period_us": 100000, 00:15:23.728 "enable": false 00:15:23.728 } 00:15:23.728 }, 00:15:23.728 { 00:15:23.728 "method": "bdev_malloc_create", 00:15:23.728 "params": { 00:15:23.728 "name": "malloc0", 00:15:23.728 "num_blocks": 8192, 00:15:23.728 "block_size": 4096, 00:15:23.728 "physical_block_size": 4096, 00:15:23.728 "uuid": "23b8c7b3-496a-4ebe-a748-50e82edb08dd", 00:15:23.728 "optimal_io_boundary": 0, 00:15:23.728 "md_size": 0, 00:15:23.728 "dif_type": 0, 00:15:23.728 "dif_is_head_of_md": false, 00:15:23.728 "dif_pi_format": 0 00:15:23.728 } 00:15:23.728 }, 00:15:23.728 { 00:15:23.728 "method": "bdev_wait_for_examine" 00:15:23.728 } 00:15:23.728 ] 00:15:23.728 }, 00:15:23.728 { 00:15:23.728 "subsystem": "nbd", 00:15:23.728 "config": [] 00:15:23.728 }, 00:15:23.728 { 00:15:23.728 "subsystem": "scheduler", 
00:15:23.728 "config": [ 00:15:23.728 { 00:15:23.728 "method": "framework_set_scheduler", 00:15:23.728 "params": { 00:15:23.728 "name": "static" 00:15:23.728 } 00:15:23.728 } 00:15:23.728 ] 00:15:23.728 }, 00:15:23.728 { 00:15:23.728 "subsystem": "nvmf", 00:15:23.728 "config": [ 00:15:23.728 { 00:15:23.728 "method": "nvmf_set_config", 00:15:23.728 "params": { 00:15:23.728 "discovery_filter": "match_any", 00:15:23.728 "admin_cmd_passthru": { 00:15:23.728 "identify_ctrlr": false 00:15:23.728 }, 00:15:23.728 "dhchap_digests": [ 00:15:23.728 "sha256", 00:15:23.728 "sha384", 00:15:23.728 "sha512" 00:15:23.728 ], 00:15:23.728 "dhchap_dhgroups": [ 00:15:23.728 "null", 00:15:23.728 "ffdhe2048", 00:15:23.728 "ffdhe3072", 00:15:23.728 "ffdhe4096", 00:15:23.728 "ffdhe6144", 00:15:23.728 "ffdhe8192" 00:15:23.728 ] 00:15:23.728 } 00:15:23.728 }, 00:15:23.728 { 00:15:23.728 "method": "nvmf_set_max_subsystems", 00:15:23.728 "params": { 00:15:23.728 "max_subsystems": 1024 00:15:23.728 } 00:15:23.728 }, 00:15:23.728 { 00:15:23.728 "method": "nvmf_set_crdt", 00:15:23.728 "params": { 00:15:23.728 "crdt1": 0, 00:15:23.728 "crdt2": 0, 00:15:23.728 "crdt3": 0 00:15:23.728 } 00:15:23.728 }, 00:15:23.728 { 00:15:23.728 "method": "nvmf_create_transport", 00:15:23.728 "params": { 00:15:23.728 "trtype": "TCP", 00:15:23.728 "max_queue_depth": 128, 00:15:23.728 "max_io_qpairs_per_ctrlr": 127, 00:15:23.728 "in_capsule_data_size": 4096, 00:15:23.728 "max_io_size": 131072, 00:15:23.728 "io_unit_size": 131072, 00:15:23.728 "max_aq_depth": 128, 00:15:23.728 "num_shared_buffers": 511, 00:15:23.728 "buf_cache_size": 4294967295, 00:15:23.728 "dif_insert_or_strip": false, 00:15:23.728 "zcopy": false, 00:15:23.728 "c2h_success": false, 00:15:23.728 "sock_priority": 0, 00:15:23.728 "abort_timeout_sec": 1, 00:15:23.728 "ack_timeout": 0, 00:15:23.728 "data_wr_pool_size": 0 00:15:23.728 } 00:15:23.728 }, 00:15:23.728 { 00:15:23.728 "method": "nvmf_create_subsystem", 00:15:23.728 "params": { 00:15:23.728 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:23.728 "allow_any_host": false, 00:15:23.728 "serial_number": "00000000000000000000", 00:15:23.728 "model_number": "SPDK bdev Controller", 00:15:23.728 "max_namespaces": 32, 00:15:23.728 "min_cntlid": 1, 00:15:23.728 "max_cntlid": 65519, 00:15:23.728 "ana_reporting": false 00:15:23.728 } 00:15:23.728 }, 00:15:23.728 { 00:15:23.728 "method": "nvmf_subsystem_add_host", 00:15:23.728 "params": { 00:15:23.728 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:23.728 "host": "nqn.2016-06.io.spdk:host1", 00:15:23.728 "psk": "key0" 00:15:23.728 } 00:15:23.728 }, 00:15:23.728 { 00:15:23.728 "method": "nvmf_subsystem_add_ns", 00:15:23.728 "params": { 00:15:23.728 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:23.728 "namespace": { 00:15:23.728 "nsid": 1, 00:15:23.728 "bdev_name": "malloc0", 00:15:23.728 "nguid": "23B8C7B3496A4EBEA74850E82EDB08DD", 00:15:23.728 "uuid": "23b8c7b3-496a-4ebe-a748-50e82edb08dd", 00:15:23.728 "no_auto_visible": false 00:15:23.728 } 00:15:23.728 } 00:15:23.728 }, 00:15:23.728 { 00:15:23.728 "method": "nvmf_subsystem_add_listener", 00:15:23.728 "params": { 00:15:23.728 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:23.728 "listen_address": { 00:15:23.728 "trtype": "TCP", 00:15:23.728 "adrfam": "IPv4", 00:15:23.728 "traddr": "10.0.0.3", 00:15:23.728 "trsvcid": "4420" 00:15:23.729 }, 00:15:23.729 "secure_channel": false, 00:15:23.729 "sock_impl": "ssl" 00:15:23.729 } 00:15:23.729 } 00:15:23.729 ] 00:15:23.729 } 00:15:23.729 ] 00:15:23.729 }' 00:15:23.729 13:01:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:23.729 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:23.729 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72552 00:15:23.729 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:23.729 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72552 00:15:23.729 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72552 ']' 00:15:23.729 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.729 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:23.729 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.729 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:23.729 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:23.729 [2024-11-29 13:01:55.121331] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:15:23.729 [2024-11-29 13:01:55.121842] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.987 [2024-11-29 13:01:55.268333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.987 [2024-11-29 13:01:55.333248] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:23.987 [2024-11-29 13:01:55.333598] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:23.987 [2024-11-29 13:01:55.333628] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:23.987 [2024-11-29 13:01:55.333637] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:23.987 [2024-11-29 13:01:55.333646] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
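Editor's note: the "-c /dev/fd/62" invocation above is the config-replay step: the JSON saved from the first target is fed straight back to a fresh nvmf_tgt. A rough sketch of the pattern (the exact fd plumbing in nvmf/common.sh may differ; the process substitution here is an assumption):

    tgtcfg=$(rpc_cmd save_config)    # captured from the original target, as at tls.sh@267
    # The real run wraps the binary in "ip netns exec nvmf_tgt_ns_spdk"; omitted here
    # so that $! is the target's own pid.
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
    nvmfpid=$!
    waitforlisten "$nvmfpid"         # autotest helper; blocks until /var/tmp/spdk.sock answers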
00:15:23.987 [2024-11-29 13:01:55.334198] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.245 [2024-11-29 13:01:55.519778] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:24.245 [2024-11-29 13:01:55.618027] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:24.245 [2024-11-29 13:01:55.649969] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:24.245 [2024-11-29 13:01:55.650254] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:24.812 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:24.812 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:24.812 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:24.812 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:24.812 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:24.812 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:24.812 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72584 00:15:24.812 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72584 /var/tmp/bdevperf.sock 00:15:24.812 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72584 ']' 00:15:24.812 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:24.812 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:24.812 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:15:24.812 "subsystems": [ 00:15:24.812 { 00:15:24.812 "subsystem": "keyring", 00:15:24.812 "config": [ 00:15:24.812 { 00:15:24.812 "method": "keyring_file_add_key", 00:15:24.812 "params": { 00:15:24.812 "name": "key0", 00:15:24.812 "path": "/tmp/tmp.nIbxcjPvpI" 00:15:24.812 } 00:15:24.812 } 00:15:24.812 ] 00:15:24.812 }, 00:15:24.812 { 00:15:24.812 "subsystem": "iobuf", 00:15:24.812 "config": [ 00:15:24.812 { 00:15:24.812 "method": "iobuf_set_options", 00:15:24.812 "params": { 00:15:24.812 "small_pool_count": 8192, 00:15:24.812 "large_pool_count": 1024, 00:15:24.812 "small_bufsize": 8192, 00:15:24.812 "large_bufsize": 135168, 00:15:24.812 "enable_numa": false 00:15:24.812 } 00:15:24.812 } 00:15:24.812 ] 00:15:24.812 }, 00:15:24.812 { 00:15:24.812 "subsystem": "sock", 00:15:24.812 "config": [ 00:15:24.812 { 00:15:24.812 "method": "sock_set_default_impl", 00:15:24.812 "params": { 00:15:24.812 "impl_name": "uring" 00:15:24.812 } 00:15:24.812 }, 00:15:24.812 { 00:15:24.812 "method": "sock_impl_set_options", 00:15:24.812 "params": { 00:15:24.812 "impl_name": "ssl", 00:15:24.812 "recv_buf_size": 4096, 00:15:24.812 "send_buf_size": 4096, 00:15:24.812 "enable_recv_pipe": true, 00:15:24.812 "enable_quickack": false, 00:15:24.812 "enable_placement_id": 0, 00:15:24.812 "enable_zerocopy_send_server": true, 00:15:24.812 "enable_zerocopy_send_client": false, 00:15:24.812 "zerocopy_threshold": 0, 00:15:24.812 "tls_version": 0, 00:15:24.812 "enable_ktls": 
false 00:15:24.812 } 00:15:24.812 }, 00:15:24.812 { 00:15:24.812 "method": "sock_impl_set_options", 00:15:24.812 "params": { 00:15:24.812 "impl_name": "posix", 00:15:24.812 "recv_buf_size": 2097152, 00:15:24.812 "send_buf_size": 2097152, 00:15:24.812 "enable_recv_pipe": true, 00:15:24.812 "enable_quickack": false, 00:15:24.812 "enable_placement_id": 0, 00:15:24.812 "enable_zerocopy_send_server": true, 00:15:24.812 "enable_zerocopy_send_client": false, 00:15:24.812 "zerocopy_threshold": 0, 00:15:24.812 "tls_version": 0, 00:15:24.812 "enable_ktls": false 00:15:24.812 } 00:15:24.812 }, 00:15:24.812 { 00:15:24.812 "method": "sock_impl_set_options", 00:15:24.812 "params": { 00:15:24.812 "impl_name": "uring", 00:15:24.812 "recv_buf_size": 2097152, 00:15:24.812 "send_buf_size": 2097152, 00:15:24.812 "enable_recv_pipe": true, 00:15:24.812 "enable_quickack": false, 00:15:24.812 "enable_placement_id": 0, 00:15:24.812 "enable_zerocopy_send_server": false, 00:15:24.812 "enable_zerocopy_send_client": false, 00:15:24.812 "zerocopy_threshold": 0, 00:15:24.812 "tls_version": 0, 00:15:24.812 "enable_ktls": false 00:15:24.812 } 00:15:24.812 } 00:15:24.812 ] 00:15:24.812 }, 00:15:24.812 { 00:15:24.812 "subsystem": "vmd", 00:15:24.812 "config": [] 00:15:24.812 }, 00:15:24.812 { 00:15:24.812 "subsystem": "accel", 00:15:24.812 "config": [ 00:15:24.812 { 00:15:24.812 "method": "accel_set_options", 00:15:24.812 "params": { 00:15:24.812 "small_cache_size": 128, 00:15:24.812 "large_cache_size": 16, 00:15:24.812 "task_count": 2048, 00:15:24.812 "sequence_count": 2048, 00:15:24.812 "buf_count": 2048 00:15:24.812 } 00:15:24.812 } 00:15:24.812 ] 00:15:24.812 }, 00:15:24.812 { 00:15:24.812 "subsystem": "bdev", 00:15:24.812 "config": [ 00:15:24.812 { 00:15:24.812 "method": "bdev_set_options", 00:15:24.812 "params": { 00:15:24.812 "bdev_io_pool_size": 65535, 00:15:24.812 "bdev_io_cache_size": 256, 00:15:24.812 "bdev_auto_examine": true, 00:15:24.812 "iobuf_small_cache_size": 128, 00:15:24.812 "iobuf_large_cache_size": 16 00:15:24.812 } 00:15:24.812 }, 00:15:24.812 { 00:15:24.812 "method": "bdev_raid_set_options", 00:15:24.812 "params": { 00:15:24.812 "process_window_size_kb": 1024, 00:15:24.812 "process_max_bandwidth_mb_sec": 0 00:15:24.812 } 00:15:24.812 }, 00:15:24.812 { 00:15:24.812 "method": "bdev_iscsi_set_options", 00:15:24.812 "params": { 00:15:24.812 "timeout_sec": 30 00:15:24.812 } 00:15:24.812 }, 00:15:24.812 { 00:15:24.812 "method": "bdev_nvme_set_options", 00:15:24.812 "params": { 00:15:24.812 "action_on_timeout": "none", 00:15:24.812 "timeout_us": 0, 00:15:24.812 "timeout_admin_us": 0, 00:15:24.812 "keep_alive_timeout_ms": 10000, 00:15:24.812 "arbitration_burst": 0, 00:15:24.812 "low_priority_weight": 0, 00:15:24.812 "medium_priority_weight": 0, 00:15:24.812 "high_priority_weight": 0, 00:15:24.812 "nvme_adminq_poll_period_us": 10000, 00:15:24.812 "nvme_ioq_poll_period_us": 0, 00:15:24.812 "io_queue_requests": 512, 00:15:24.812 "delay_cmd_submit": true, 00:15:24.812 "transport_retry_count": 4, 00:15:24.812 "bdev_retry_count": 3, 00:15:24.812 "transport_ack_timeout": 0, 00:15:24.812 "ctrlr_loss_timeout_sec": 0, 00:15:24.812 "reconnect_delay_sec": 0, 00:15:24.812 "fast_io_fail_timeout_sec": 0, 00:15:24.812 "disable_auto_failback": false, 00:15:24.812 "generate_uuids": false, 00:15:24.812 "transport_tos": 0, 00:15:24.812 "nvme_error_stat": false, 00:15:24.812 "rdma_srq_size": 0, 00:15:24.812 "io_path_stat": false, 00:15:24.812 "allow_accel_sequence": false, 00:15:24.812 "rdma_max_cq_size": 0, 00:15:24.812 
"rdma_cm_event_timeout_ms": 0, 00:15:24.812 "dhchap_digests": [ 00:15:24.812 "sha256", 00:15:24.812 "sha384", 00:15:24.812 "sha512" 00:15:24.812 ], 00:15:24.812 "dhchap_dhgroups": [ 00:15:24.812 "null", 00:15:24.812 "ffdhe2048", 00:15:24.812 "ffdhe3072", 00:15:24.812 "ffdhe4096", 00:15:24.812 "ffdhe6144", 00:15:24.812 "ffdhe8192" 00:15:24.812 ] 00:15:24.812 } 00:15:24.812 }, 00:15:24.812 { 00:15:24.812 "method": "bdev_nvme_attach_controller", 00:15:24.812 "params": { 00:15:24.812 "name": "nvme0", 00:15:24.812 "trtype": "TCP", 00:15:24.812 "adrfam": "IPv4", 00:15:24.812 "traddr": "10.0.0.3", 00:15:24.812 "trsvcid": "4420", 00:15:24.812 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:24.812 "prchk_reftag": false, 00:15:24.812 "prchk_guard": false, 00:15:24.812 "ctrlr_loss_timeout_sec": 0, 00:15:24.812 "reconnect_delay_sec": 0, 00:15:24.812 "fast_io_fail_timeout_sec": 0, 00:15:24.812 "psk": "key0", 00:15:24.813 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:24.813 "hdgst": false, 00:15:24.813 "ddgst": false, 00:15:24.813 "multipath": "multipath" 00:15:24.813 } 00:15:24.813 }, 00:15:24.813 { 00:15:24.813 "method": "bdev_nvme_set_hotplug", 00:15:24.813 "params": { 00:15:24.813 "period_us": 100000, 00:15:24.813 "enable": false 00:15:24.813 } 00:15:24.813 }, 00:15:24.813 { 00:15:24.813 "method": "bdev_enable_histogram", 00:15:24.813 "params": { 00:15:24.813 "name": "nvme0n1", 00:15:24.813 "enable": true 00:15:24.813 } 00:15:24.813 }, 00:15:24.813 { 00:15:24.813 "method": "bdev_wait_for_examine" 00:15:24.813 } 00:15:24.813 ] 00:15:24.813 }, 00:15:24.813 { 00:15:24.813 "subsystem": "nbd", 00:15:24.813 "config": [] 00:15:24.813 } 00:15:24.813 ] 00:15:24.813 }' 00:15:24.813 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:24.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:24.813 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:24.813 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:24.813 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:24.813 [2024-11-29 13:01:56.216854] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:15:24.813 [2024-11-29 13:01:56.216977] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72584 ] 00:15:25.072 [2024-11-29 13:01:56.359184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.072 [2024-11-29 13:01:56.436517] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.330 [2024-11-29 13:01:56.591404] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:25.330 [2024-11-29 13:01:56.658148] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:25.896 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:25.896 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:25.896 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:25.896 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:15:26.156 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.156 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:26.156 Running I/O for 1 seconds... 00:15:27.377 3624.00 IOPS, 14.16 MiB/s 00:15:27.378 Latency(us) 00:15:27.378 [2024-11-29T13:01:58.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.378 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:27.378 Verification LBA range: start 0x0 length 0x2000 00:15:27.378 nvme0n1 : 1.02 3663.91 14.31 0.00 0.00 34421.04 4974.78 24307.90 00:15:27.378 [2024-11-29T13:01:58.893Z] =================================================================================================================== 00:15:27.378 [2024-11-29T13:01:58.893Z] Total : 3663.91 14.31 0.00 0.00 34421.04 4974.78 24307.90 00:15:27.378 { 00:15:27.378 "results": [ 00:15:27.378 { 00:15:27.378 "job": "nvme0n1", 00:15:27.378 "core_mask": "0x2", 00:15:27.378 "workload": "verify", 00:15:27.378 "status": "finished", 00:15:27.378 "verify_range": { 00:15:27.378 "start": 0, 00:15:27.378 "length": 8192 00:15:27.378 }, 00:15:27.378 "queue_depth": 128, 00:15:27.378 "io_size": 4096, 00:15:27.378 "runtime": 1.024044, 00:15:27.378 "iops": 3663.9050665791706, 00:15:27.378 "mibps": 14.312129166324885, 00:15:27.378 "io_failed": 0, 00:15:27.378 "io_timeout": 0, 00:15:27.378 "avg_latency_us": 34421.040480713316, 00:15:27.378 "min_latency_us": 4974.778181818182, 00:15:27.378 "max_latency_us": 24307.898181818182 00:15:27.378 } 00:15:27.378 ], 00:15:27.378 "core_count": 1 00:15:27.378 } 00:15:27.378 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:15:27.378 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:15:27.378 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:27.378 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:15:27.378 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 
00:15:27.378 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:15:27.378 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:27.378 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:15:27.378 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:15:27.378 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:15:27.378 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:27.378 nvmf_trace.0 00:15:27.378 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:15:27.378 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72584 00:15:27.378 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72584 ']' 00:15:27.378 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72584 00:15:27.378 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:27.378 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:27.378 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72584 00:15:27.378 killing process with pid 72584 00:15:27.378 Received shutdown signal, test time was about 1.000000 seconds 00:15:27.378 00:15:27.378 Latency(us) 00:15:27.378 [2024-11-29T13:01:58.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.378 [2024-11-29T13:01:58.893Z] =================================================================================================================== 00:15:27.378 [2024-11-29T13:01:58.893Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:27.378 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:27.378 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:27.378 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72584' 00:15:27.378 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72584 00:15:27.378 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72584 00:15:27.637 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:27.637 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:27.637 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:15:27.637 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:27.637 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:15:27.637 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:27.637 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:27.637 rmmod nvme_tcp 00:15:27.637 rmmod nvme_fabrics 00:15:27.637 rmmod nvme_keyring 00:15:27.637 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:27.637 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:15:27.637 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:15:27.637 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 72552 ']' 00:15:27.637 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 72552 00:15:27.637 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72552 ']' 00:15:27.637 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72552 00:15:27.637 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:27.637 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:27.895 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72552 00:15:27.895 killing process with pid 72552 00:15:27.895 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:27.895 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:27.895 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72552' 00:15:27.895 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72552 00:15:27.895 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72552 00:15:28.153 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:28.153 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:28.153 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:28.153 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:15:28.153 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:15:28.153 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:28.153 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:15:28.153 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:28.153 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:28.153 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:28.153 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:28.153 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:28.153 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:28.153 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:28.153 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:28.153 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:28.153 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:28.153 13:01:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:28.153 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:28.153 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:28.153 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:28.412 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:28.412 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:28.412 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.412 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:28.412 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.412 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:15:28.412 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.hZbdiziWuO /tmp/tmp.OSkxWV24Tw /tmp/tmp.nIbxcjPvpI 00:15:28.412 ************************************ 00:15:28.412 END TEST nvmf_tls 00:15:28.412 ************************************ 00:15:28.412 00:15:28.412 real 1m26.565s 00:15:28.412 user 2m18.300s 00:15:28.412 sys 0m29.530s 00:15:28.412 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:28.412 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:28.412 13:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:28.412 13:01:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:28.412 13:01:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:28.412 13:01:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:28.412 ************************************ 00:15:28.412 START TEST nvmf_fips 00:15:28.412 ************************************ 00:15:28.412 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:28.412 * Looking for test storage... 
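Before the TLS target is killed in the teardown above, process_shm archives the trace file the target left in /dev/shm. A rough standalone equivalent of that step, assuming the trace name and the output directory shown in the log:

  out=/home/vagrant/spdk_repo/output                         # ../output relative to the repo, per the log
  for f in $(find /dev/shm -name '*.0' -printf '%f\n'); do   # trace files for shm id 0
      tar -C /dev/shm/ -czf "$out/${f}_shm.tar.gz" "$f"
  done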
00:15:28.412 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:15:28.412 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:28.412 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:15:28.412 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:28.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.671 --rc genhtml_branch_coverage=1 00:15:28.671 --rc genhtml_function_coverage=1 00:15:28.671 --rc genhtml_legend=1 00:15:28.671 --rc geninfo_all_blocks=1 00:15:28.671 --rc geninfo_unexecuted_blocks=1 00:15:28.671 00:15:28.671 ' 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:28.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.671 --rc genhtml_branch_coverage=1 00:15:28.671 --rc genhtml_function_coverage=1 00:15:28.671 --rc genhtml_legend=1 00:15:28.671 --rc geninfo_all_blocks=1 00:15:28.671 --rc geninfo_unexecuted_blocks=1 00:15:28.671 00:15:28.671 ' 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:28.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.671 --rc genhtml_branch_coverage=1 00:15:28.671 --rc genhtml_function_coverage=1 00:15:28.671 --rc genhtml_legend=1 00:15:28.671 --rc geninfo_all_blocks=1 00:15:28.671 --rc geninfo_unexecuted_blocks=1 00:15:28.671 00:15:28.671 ' 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:28.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.671 --rc genhtml_branch_coverage=1 00:15:28.671 --rc genhtml_function_coverage=1 00:15:28.671 --rc genhtml_legend=1 00:15:28.671 --rc geninfo_all_blocks=1 00:15:28.671 --rc geninfo_unexecuted_blocks=1 00:15:28.671 00:15:28.671 ' 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
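The lt/ge calls traced here are SPDK's field-by-field version comparison (cmp_versions in scripts/common.sh): each version string is split on ".-:" into an array and compared element by element — lcov 1.15 against 2 above, and the same helper is reused further down for the OpenSSL >= 3.0.0 requirement of the FIPS test. A simplified sketch of the idea, assuming purely numeric fields, not the actual helper:

  ver_lt() {                             # succeed if $1 < $2
      local IFS=.-: i
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          local x=${a[i]:-0} y=${b[i]:-0}
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1                           # equal is not less-than
  }
  ver_lt 1.15 2 && echo 'lcov predates 2.x'
  ver_lt 3.1.1 3.0.0 || echo 'OpenSSL is new enough for the FIPS check'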
00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.671 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.672 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.672 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:28.672 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.672 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:15:28.672 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:28.672 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:28.672 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:28.672 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:28.672 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:28.672 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:28.672 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:28.672 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:28.672 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:28.672 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:28.672 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:28.672 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:15:28.672 Error setting digest 00:15:28.672 401278598F7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:15:28.672 401278598F7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:28.672 
13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:28.672 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:28.930 Cannot find device "nvmf_init_br" 00:15:28.930 13:02:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:28.930 Cannot find device "nvmf_init_br2" 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:28.930 Cannot find device "nvmf_tgt_br" 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:28.930 Cannot find device "nvmf_tgt_br2" 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:28.930 Cannot find device "nvmf_init_br" 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:28.930 Cannot find device "nvmf_init_br2" 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:28.930 Cannot find device "nvmf_tgt_br" 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:28.930 Cannot find device "nvmf_tgt_br2" 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:28.930 Cannot find device "nvmf_br" 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:28.930 Cannot find device "nvmf_init_if" 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:28.930 Cannot find device "nvmf_init_if2" 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:28.930 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:28.930 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:28.930 13:02:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:28.930 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:29.189 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:29.189 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:29.189 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:29.189 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:29.189 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:29.189 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:29.189 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:29.189 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:29.189 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:29.189 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:29.189 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:29.189 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:29.189 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:29.189 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:29.189 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:29.189 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:15:29.189 00:15:29.189 --- 10.0.0.3 ping statistics --- 00:15:29.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.189 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:15:29.189 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:29.189 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:29.189 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:15:29.189 00:15:29.189 --- 10.0.0.4 ping statistics --- 00:15:29.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.189 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:15:29.189 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:29.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:29.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:29.189 00:15:29.189 --- 10.0.0.1 ping statistics --- 00:15:29.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.189 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:29.189 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:29.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:29.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:15:29.189 00:15:29.189 --- 10.0.0.2 ping statistics --- 00:15:29.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.189 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:29.189 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:29.189 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:15:29.189 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:29.189 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:29.189 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:29.189 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:29.189 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:29.189 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:29.189 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:29.189 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:15:29.189 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:29.189 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:29.189 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:29.189 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=72911 00:15:29.189 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:29.190 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 72911 00:15:29.190 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72911 ']' 00:15:29.190 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.190 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:29.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.190 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.190 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:29.190 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:29.190 [2024-11-29 13:02:00.665287] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
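nvmfappstart here launches the target binary inside the nvmf_tgt_ns_spdk namespace on core mask 0x2 and then blocks until its RPC socket answers (waitforlisten 72911). A crude standalone sketch of the same sequence, with the binary path and socket name taken from the log and a simple socket-exists poll standing in for waitforlisten (which also probes the RPC server):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done   # wait for the RPC socket to appear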
00:15:29.190 [2024-11-29 13:02:00.665409] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.448 [2024-11-29 13:02:00.814091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.448 [2024-11-29 13:02:00.876199] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.448 [2024-11-29 13:02:00.876469] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:29.448 [2024-11-29 13:02:00.876505] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:29.448 [2024-11-29 13:02:00.876514] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:29.448 [2024-11-29 13:02:00.876521] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:29.448 [2024-11-29 13:02:00.876961] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.448 [2024-11-29 13:02:00.930825] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:29.706 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:29.706 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:15:29.706 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:29.706 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:29.706 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:29.706 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:29.706 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:15:29.707 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:29.707 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:15:29.707 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Qo5 00:15:29.707 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:29.707 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Qo5 00:15:29.707 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Qo5 00:15:29.707 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Qo5 00:15:29.707 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:29.966 [2024-11-29 13:02:01.348881] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:29.966 [2024-11-29 13:02:01.364842] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:29.966 [2024-11-29 13:02:01.365082] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:29.966 malloc0 00:15:29.966 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:29.966 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:29.966 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=72939 00:15:29.966 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:29.966 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 72939 /var/tmp/bdevperf.sock 00:15:29.966 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72939 ']' 00:15:29.966 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:29.966 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:29.966 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:29.966 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:29.966 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:30.225 [2024-11-29 13:02:01.519474] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:15:30.225 [2024-11-29 13:02:01.519611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72939 ] 00:15:30.225 [2024-11-29 13:02:01.674545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.225 [2024-11-29 13:02:01.736903] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.483 [2024-11-29 13:02:01.795584] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:30.483 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:30.484 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:15:30.484 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Qo5 00:15:30.742 13:02:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:31.001 [2024-11-29 13:02:02.420862] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:31.001 TLSTESTn1 00:15:31.001 13:02:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:31.260 Running I/O for 10 seconds... 
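On the initiator side, the TLS run above reduces to three RPCs against the bdevperf socket: register the PSK file as keyring key0, attach a controller to the 10.0.0.3:4420 listener with that key, then start the workload. Condensed from the trace, with the same paths and NQNs as in the log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Qo5
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk key0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests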
00:15:33.601 4260.00 IOPS, 16.64 MiB/s [2024-11-29T13:02:05.700Z] 4290.50 IOPS, 16.76 MiB/s [2024-11-29T13:02:07.075Z] 4308.00 IOPS, 16.83 MiB/s [2024-11-29T13:02:08.011Z] 4305.00 IOPS, 16.82 MiB/s [2024-11-29T13:02:08.947Z] 4288.00 IOPS, 16.75 MiB/s [2024-11-29T13:02:09.882Z] 4247.67 IOPS, 16.59 MiB/s [2024-11-29T13:02:10.819Z] 4256.29 IOPS, 16.63 MiB/s [2024-11-29T13:02:11.764Z] 4263.38 IOPS, 16.65 MiB/s [2024-11-29T13:02:12.701Z] 4259.89 IOPS, 16.64 MiB/s [2024-11-29T13:02:12.701Z] 4259.60 IOPS, 16.64 MiB/s 00:15:41.186 Latency(us) 00:15:41.186 [2024-11-29T13:02:12.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.186 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:41.186 Verification LBA range: start 0x0 length 0x2000 00:15:41.186 TLSTESTn1 : 10.02 4265.32 16.66 0.00 0.00 29954.41 5987.61 31218.97 00:15:41.186 [2024-11-29T13:02:12.701Z] =================================================================================================================== 00:15:41.186 [2024-11-29T13:02:12.701Z] Total : 4265.32 16.66 0.00 0.00 29954.41 5987.61 31218.97 00:15:41.186 { 00:15:41.186 "results": [ 00:15:41.186 { 00:15:41.186 "job": "TLSTESTn1", 00:15:41.186 "core_mask": "0x4", 00:15:41.186 "workload": "verify", 00:15:41.186 "status": "finished", 00:15:41.186 "verify_range": { 00:15:41.186 "start": 0, 00:15:41.186 "length": 8192 00:15:41.186 }, 00:15:41.186 "queue_depth": 128, 00:15:41.186 "io_size": 4096, 00:15:41.186 "runtime": 10.016589, 00:15:41.186 "iops": 4265.32425359571, 00:15:41.186 "mibps": 16.66142286560824, 00:15:41.186 "io_failed": 0, 00:15:41.186 "io_timeout": 0, 00:15:41.186 "avg_latency_us": 29954.41220587109, 00:15:41.186 "min_latency_us": 5987.607272727273, 00:15:41.186 "max_latency_us": 31218.967272727274 00:15:41.186 } 00:15:41.186 ], 00:15:41.186 "core_count": 1 00:15:41.186 } 00:15:41.186 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:41.186 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:41.186 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:15:41.186 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:15:41.186 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:15:41.186 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:41.186 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:15:41.186 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:15:41.186 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:15:41.186 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:41.445 nvmf_trace.0 00:15:41.445 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:15:41.445 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 72939 00:15:41.445 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72939 ']' 00:15:41.445 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
72939 00:15:41.445 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:15:41.445 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:41.445 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72939 00:15:41.445 killing process with pid 72939 00:15:41.445 Received shutdown signal, test time was about 10.000000 seconds 00:15:41.445 00:15:41.445 Latency(us) 00:15:41.445 [2024-11-29T13:02:12.960Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.445 [2024-11-29T13:02:12.960Z] =================================================================================================================== 00:15:41.445 [2024-11-29T13:02:12.960Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:41.445 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:41.445 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:41.445 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72939' 00:15:41.445 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72939 00:15:41.445 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72939 00:15:41.704 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:41.704 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:41.704 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:15:41.704 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:41.704 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:15:41.704 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:41.704 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:41.704 rmmod nvme_tcp 00:15:41.704 rmmod nvme_fabrics 00:15:41.704 rmmod nvme_keyring 00:15:41.704 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:41.704 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:15:41.704 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:15:41.704 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 72911 ']' 00:15:41.704 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 72911 00:15:41.704 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72911 ']' 00:15:41.704 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 72911 00:15:41.704 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:15:41.704 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:41.704 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72911 00:15:41.704 killing process with pid 72911 00:15:41.704 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:41.704 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:41.704 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72911' 00:15:41.704 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72911 00:15:41.704 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72911 00:15:41.963 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:41.963 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:41.963 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:41.963 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:15:41.963 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:41.963 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:15:41.963 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:15:41.963 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:41.963 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:41.963 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:41.963 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:41.963 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:41.963 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:41.963 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:41.963 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:41.963 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:41.963 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:41.964 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:42.222 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:42.222 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:42.222 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:42.222 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:42.222 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:42.222 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.222 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:42.222 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.222 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:15:42.222 13:02:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Qo5 00:15:42.222 00:15:42.222 real 0m13.819s 00:15:42.222 user 0m18.945s 00:15:42.222 sys 0m5.780s 00:15:42.222 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:42.222 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:42.222 ************************************ 00:15:42.222 END TEST nvmf_fips 00:15:42.222 ************************************ 00:15:42.222 13:02:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:42.222 13:02:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:42.222 13:02:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:42.222 13:02:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:42.222 ************************************ 00:15:42.222 START TEST nvmf_control_msg_list 00:15:42.222 ************************************ 00:15:42.222 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:42.482 * Looking for test storage... 00:15:42.482 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:42.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.482 --rc genhtml_branch_coverage=1 00:15:42.482 --rc genhtml_function_coverage=1 00:15:42.482 --rc genhtml_legend=1 00:15:42.482 --rc geninfo_all_blocks=1 00:15:42.482 --rc geninfo_unexecuted_blocks=1 00:15:42.482 00:15:42.482 ' 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:42.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.482 --rc genhtml_branch_coverage=1 00:15:42.482 --rc genhtml_function_coverage=1 00:15:42.482 --rc genhtml_legend=1 00:15:42.482 --rc geninfo_all_blocks=1 00:15:42.482 --rc geninfo_unexecuted_blocks=1 00:15:42.482 00:15:42.482 ' 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:42.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.482 --rc genhtml_branch_coverage=1 00:15:42.482 --rc genhtml_function_coverage=1 00:15:42.482 --rc genhtml_legend=1 00:15:42.482 --rc geninfo_all_blocks=1 00:15:42.482 --rc geninfo_unexecuted_blocks=1 00:15:42.482 00:15:42.482 ' 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:42.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.482 --rc genhtml_branch_coverage=1 00:15:42.482 --rc genhtml_function_coverage=1 00:15:42.482 --rc genhtml_legend=1 00:15:42.482 --rc geninfo_all_blocks=1 00:15:42.482 --rc 
geninfo_unexecuted_blocks=1 00:15:42.482 00:15:42.482 ' 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:42.482 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:42.483 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:42.483 Cannot find device "nvmf_init_br" 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:42.483 Cannot find device "nvmf_init_br2" 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:42.483 Cannot find device "nvmf_tgt_br" 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:42.483 Cannot find device "nvmf_tgt_br2" 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:42.483 Cannot find device "nvmf_init_br" 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:42.483 Cannot find device "nvmf_init_br2" 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:42.483 Cannot find device "nvmf_tgt_br" 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:42.483 Cannot find device "nvmf_tgt_br2" 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:42.483 Cannot find device "nvmf_br" 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:42.483 Cannot find 
device "nvmf_init_if" 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:42.483 Cannot find device "nvmf_init_if2" 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:42.483 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:15:42.483 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:42.742 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:42.742 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:15:42.742 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:42.742 13:02:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:42.742 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:42.742 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:15:42.742 00:15:42.742 --- 10.0.0.3 ping statistics --- 00:15:42.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.742 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:42.742 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:42.742 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:15:42.742 00:15:42.742 --- 10.0.0.4 ping statistics --- 00:15:42.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.742 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:42.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:42.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:15:42.742 00:15:42.742 --- 10.0.0.1 ping statistics --- 00:15:42.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.742 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:42.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:42.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:15:42.742 00:15:42.742 --- 10.0.0.2 ping statistics --- 00:15:42.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.742 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:42.742 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:43.001 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:15:43.001 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:43.001 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:43.001 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:43.001 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=73330 00:15:43.001 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:43.001 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 73330 00:15:43.001 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 73330 ']' 00:15:43.001 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.001 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:43.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.001 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:43.001 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:43.001 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:43.001 [2024-11-29 13:02:14.328274] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:15:43.001 [2024-11-29 13:02:14.328364] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.001 [2024-11-29 13:02:14.479229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.261 [2024-11-29 13:02:14.547265] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:43.261 [2024-11-29 13:02:14.547332] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:43.261 [2024-11-29 13:02:14.547346] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:43.261 [2024-11-29 13:02:14.547356] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:43.261 [2024-11-29 13:02:14.547366] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:43.261 [2024-11-29 13:02:14.547870] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.261 [2024-11-29 13:02:14.607503] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:43.261 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:43.261 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:15:43.261 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:43.261 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:43.261 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:43.261 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:43.261 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:43.261 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:43.261 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:15:43.261 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.261 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:43.261 [2024-11-29 13:02:14.733542] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:43.261 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.261 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:15:43.261 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.261 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:43.261 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.261 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:43.261 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.261 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:43.261 Malloc0 00:15:43.261 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.261 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:43.261 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.261 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:43.261 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.261 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:43.261 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.261 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:43.261 [2024-11-29 13:02:14.773724] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:43.522 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.522 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73349 00:15:43.522 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:43.522 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73350 00:15:43.522 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:43.522 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73351 00:15:43.522 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:43.522 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73349 00:15:43.522 [2024-11-29 13:02:14.972472] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:43.522 [2024-11-29 13:02:14.972748] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:43.522 [2024-11-29 13:02:14.972950] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:44.898 Initializing NVMe Controllers 00:15:44.898 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:44.898 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:15:44.898 Initialization complete. Launching workers. 00:15:44.898 ======================================================== 00:15:44.898 Latency(us) 00:15:44.898 Device Information : IOPS MiB/s Average min max 00:15:44.898 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3340.00 13.05 299.09 216.68 522.56 00:15:44.898 ======================================================== 00:15:44.898 Total : 3340.00 13.05 299.09 216.68 522.56 00:15:44.898 00:15:44.898 Initializing NVMe Controllers 00:15:44.898 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:44.898 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:15:44.898 Initialization complete. Launching workers. 00:15:44.898 ======================================================== 00:15:44.898 Latency(us) 00:15:44.898 Device Information : IOPS MiB/s Average min max 00:15:44.898 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3337.96 13.04 299.14 213.23 495.87 00:15:44.898 ======================================================== 00:15:44.898 Total : 3337.96 13.04 299.14 213.23 495.87 00:15:44.898 00:15:44.898 Initializing NVMe Controllers 00:15:44.898 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:44.898 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:15:44.898 Initialization complete. Launching workers. 
00:15:44.898 ======================================================== 00:15:44.898 Latency(us) 00:15:44.898 Device Information : IOPS MiB/s Average min max 00:15:44.898 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3328.00 13.00 300.00 207.23 826.54 00:15:44.898 ======================================================== 00:15:44.898 Total : 3328.00 13.00 300.00 207.23 826.54 00:15:44.898 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73350 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73351 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:44.898 rmmod nvme_tcp 00:15:44.898 rmmod nvme_fabrics 00:15:44.898 rmmod nvme_keyring 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 73330 ']' 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 73330 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 73330 ']' 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 73330 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73330 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73330' 00:15:44.898 killing process with pid 73330 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 73330 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 73330 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:44.898 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:45.157 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:45.157 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:45.157 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:45.157 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:45.157 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:45.157 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:45.157 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:45.157 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:45.157 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:45.157 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:45.157 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:45.157 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:45.157 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.157 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:45.157 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.157 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:15:45.157 00:15:45.157 real 0m2.970s 00:15:45.157 user 0m4.923s 00:15:45.157 
sys 0m1.256s 00:15:45.157 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:45.157 ************************************ 00:15:45.157 END TEST nvmf_control_msg_list 00:15:45.157 ************************************ 00:15:45.157 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:45.417 13:02:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:45.417 13:02:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:45.417 13:02:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:45.417 13:02:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:45.417 ************************************ 00:15:45.417 START TEST nvmf_wait_for_buf 00:15:45.417 ************************************ 00:15:45.417 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:45.417 * Looking for test storage... 00:15:45.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:45.417 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:45.417 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:15:45.417 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:45.417 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:45.417 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:45.417 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:45.417 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:45.417 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:15:45.417 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:15:45.417 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:15:45.417 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:15:45.417 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:15:45.417 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:15:45.417 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:15:45.417 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:45.417 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:15:45.417 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:15:45.417 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:45.417 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:45.417 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:15:45.417 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:15:45.417 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:45.417 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:45.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.418 --rc genhtml_branch_coverage=1 00:15:45.418 --rc genhtml_function_coverage=1 00:15:45.418 --rc genhtml_legend=1 00:15:45.418 --rc geninfo_all_blocks=1 00:15:45.418 --rc geninfo_unexecuted_blocks=1 00:15:45.418 00:15:45.418 ' 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:45.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.418 --rc genhtml_branch_coverage=1 00:15:45.418 --rc genhtml_function_coverage=1 00:15:45.418 --rc genhtml_legend=1 00:15:45.418 --rc geninfo_all_blocks=1 00:15:45.418 --rc geninfo_unexecuted_blocks=1 00:15:45.418 00:15:45.418 ' 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:45.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.418 --rc genhtml_branch_coverage=1 00:15:45.418 --rc genhtml_function_coverage=1 00:15:45.418 --rc genhtml_legend=1 00:15:45.418 --rc geninfo_all_blocks=1 00:15:45.418 --rc geninfo_unexecuted_blocks=1 00:15:45.418 00:15:45.418 ' 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:45.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.418 --rc genhtml_branch_coverage=1 00:15:45.418 --rc genhtml_function_coverage=1 00:15:45.418 --rc genhtml_legend=1 00:15:45.418 --rc geninfo_all_blocks=1 00:15:45.418 --rc geninfo_unexecuted_blocks=1 00:15:45.418 00:15:45.418 ' 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:45.418 13:02:16 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:45.418 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:45.419 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
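The "[: : integer expression expected" message logged from nvmf/common.sh line 33 above comes from handing an empty string to a numeric test: build_nvmf_app_args evaluates '[' '' -eq 1 ']' while the option it checks is unset, the test prints the error and fails, and the script simply carries on. A minimal bash sketch of the failure mode and of a defaulted guard (the variable name is illustrative, not the one common.sh uses):

    flag=""                          # stand-in for an option variable that was never set
    if [ "$flag" -eq 1 ]; then       # prints "[: : integer expression expected" and returns non-zero
        echo "option enabled"
    fi

    if [ "${flag:-0}" -eq 1 ]; then  # defaulted expansion keeps the numeric test quiet
        echo "option enabled"
    fi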
00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:45.419 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:45.678 Cannot find device "nvmf_init_br" 00:15:45.678 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:15:45.678 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:45.678 Cannot find device "nvmf_init_br2" 00:15:45.678 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:15:45.678 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:45.678 Cannot find device "nvmf_tgt_br" 00:15:45.678 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:15:45.678 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:45.678 Cannot find device "nvmf_tgt_br2" 00:15:45.678 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:15:45.678 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:45.678 Cannot find device "nvmf_init_br" 00:15:45.678 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:15:45.678 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:45.678 Cannot find device "nvmf_init_br2" 00:15:45.678 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:15:45.678 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:45.678 Cannot find device "nvmf_tgt_br" 00:15:45.678 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:15:45.678 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:45.678 Cannot find device "nvmf_tgt_br2" 00:15:45.678 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:15:45.678 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:45.678 Cannot find device "nvmf_br" 00:15:45.678 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:15:45.678 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:45.678 Cannot find device "nvmf_init_if" 00:15:45.678 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:15:45.678 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:45.678 Cannot find device "nvmf_init_if2" 00:15:45.678 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:15:45.678 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:45.678 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:45.678 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:15:45.678 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:45.678 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:45.678 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:15:45.678 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:45.678 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:45.678 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:45.678 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:45.678 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:45.678 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:45.678 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:45.678 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:45.678 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:45.678 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:45.678 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:45.937 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:45.937 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:45.937 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:45.937 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:45.937 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:45.937 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:45.937 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:45.937 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:45.937 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:45.937 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:45.937 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:45.937 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:45.937 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:45.937 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:45.937 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:45.937 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:45.937 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:45.937 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:45.938 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:45.938 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:45.938 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:45.938 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:45.938 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:45.938 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:15:45.938 00:15:45.938 --- 10.0.0.3 ping statistics --- 00:15:45.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.938 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:15:45.938 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:45.938 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:45.938 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:15:45.938 00:15:45.938 --- 10.0.0.4 ping statistics --- 00:15:45.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.938 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:15:45.938 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:45.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:45.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:15:45.938 00:15:45.938 --- 10.0.0.1 ping statistics --- 00:15:45.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.938 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:15:45.938 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:45.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:45.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:15:45.938 00:15:45.938 --- 10.0.0.2 ping statistics --- 00:15:45.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.938 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:15:45.938 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:45.938 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:15:45.938 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:45.938 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:45.938 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:45.938 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:45.938 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:45.938 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:45.938 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:45.938 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:15:45.938 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:45.938 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:45.938 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:45.938 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=73594 00:15:45.938 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:15:45.938 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 73594 00:15:45.938 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 73594 ']' 00:15:45.938 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.938 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:45.938 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.938 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:45.938 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:45.938 [2024-11-29 13:02:17.431190] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
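nvmfappstart, traced above, launches the target inside the test namespace with --wait-for-rpc, and waitforlisten then blocks until the RPC socket is ready before any rpc_cmd calls are issued. A condensed sketch of that startup step using the same command line the log shows; the polling loop is only illustrative of what waitforlisten amounts to, the real helper in autotest_common.sh does more:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!

    # Wait for the UNIX domain socket the RPC server listens on to appear.
    for _ in $(seq 1 100); do
        [ -S /var/tmp/spdk.sock ] && break
        sleep 0.1
    done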
00:15:45.938 [2024-11-29 13:02:17.431485] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.197 [2024-11-29 13:02:17.584858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.197 [2024-11-29 13:02:17.647623] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:46.197 [2024-11-29 13:02:17.647676] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:46.197 [2024-11-29 13:02:17.647690] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:46.197 [2024-11-29 13:02:17.647700] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:46.197 [2024-11-29 13:02:17.647709] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:46.197 [2024-11-29 13:02:17.648169] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.197 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:46.197 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:15:46.197 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:46.197 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:46.197 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.456 13:02:17 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:46.456 [2024-11-29 13:02:17.798864] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:46.456 Malloc0 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:46.456 [2024-11-29 13:02:17.873207] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:46.456 [2024-11-29 13:02:17.901404] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.456 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:46.714 [2024-11-29 13:02:18.106096] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:48.091 Initializing NVMe Controllers 00:15:48.091 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:48.091 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:15:48.091 Initialization complete. Launching workers. 00:15:48.091 ======================================================== 00:15:48.091 Latency(us) 00:15:48.091 Device Information : IOPS MiB/s Average min max 00:15:48.091 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 578.32 72.29 6917.27 1930.21 10002.68 00:15:48.091 ======================================================== 00:15:48.091 Total : 578.32 72.29 6917.27 1930.21 10002.68 00:15:48.091 00:15:48.091 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:15:48.091 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.091 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:15:48.091 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:48.091 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.091 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=5510 00:15:48.091 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 5510 -eq 0 ]] 00:15:48.091 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:48.091 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:15:48.091 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:48.091 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:15:48.091 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:48.091 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:15:48.091 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:48.091 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:48.091 rmmod nvme_tcp 00:15:48.091 rmmod nvme_fabrics 00:15:48.091 rmmod nvme_keyring 00:15:48.091 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:48.091 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:15:48.091 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:15:48.091 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 73594 ']' 00:15:48.091 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 73594 00:15:48.091 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 73594 ']' 00:15:48.091 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- 
# kill -0 73594 00:15:48.091 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:15:48.091 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:48.091 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73594 00:15:48.091 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:48.091 killing process with pid 73594 00:15:48.091 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:48.091 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73594' 00:15:48.091 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 73594 00:15:48.091 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 73594 00:15:48.349 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:48.349 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:48.349 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:48.349 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:15:48.349 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:15:48.349 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:48.349 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:15:48.349 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:48.349 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:48.349 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:48.349 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:48.349 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:48.349 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:48.349 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:48.349 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:48.349 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:48.349 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:48.349 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:48.349 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:48.607 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:48.607 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:48.607 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:48.607 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:48.607 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.607 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:48.607 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.607 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:15:48.607 00:15:48.607 real 0m3.306s 00:15:48.607 user 0m2.576s 00:15:48.607 sys 0m0.833s 00:15:48.607 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:48.608 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:48.608 ************************************ 00:15:48.608 END TEST nvmf_wait_for_buf 00:15:48.608 ************************************ 00:15:48.608 13:02:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:15:48.608 13:02:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:15:48.608 13:02:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:15:48.608 13:02:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:48.608 13:02:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:48.608 13:02:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:48.608 ************************************ 00:15:48.608 START TEST nvmf_nsid 00:15:48.608 ************************************ 00:15:48.608 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:15:48.608 * Looking for test storage... 
00:15:48.867 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:48.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.867 --rc genhtml_branch_coverage=1 00:15:48.867 --rc genhtml_function_coverage=1 00:15:48.867 --rc genhtml_legend=1 00:15:48.867 --rc geninfo_all_blocks=1 00:15:48.867 --rc geninfo_unexecuted_blocks=1 00:15:48.867 00:15:48.867 ' 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:48.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.867 --rc genhtml_branch_coverage=1 00:15:48.867 --rc genhtml_function_coverage=1 00:15:48.867 --rc genhtml_legend=1 00:15:48.867 --rc geninfo_all_blocks=1 00:15:48.867 --rc geninfo_unexecuted_blocks=1 00:15:48.867 00:15:48.867 ' 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:48.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.867 --rc genhtml_branch_coverage=1 00:15:48.867 --rc genhtml_function_coverage=1 00:15:48.867 --rc genhtml_legend=1 00:15:48.867 --rc geninfo_all_blocks=1 00:15:48.867 --rc geninfo_unexecuted_blocks=1 00:15:48.867 00:15:48.867 ' 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:48.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.867 --rc genhtml_branch_coverage=1 00:15:48.867 --rc genhtml_function_coverage=1 00:15:48.867 --rc genhtml_legend=1 00:15:48.867 --rc geninfo_all_blocks=1 00:15:48.867 --rc geninfo_unexecuted_blocks=1 00:15:48.867 00:15:48.867 ' 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
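The scripts/common.sh walk traced above is the "lt 1.15 2" check that decides which lcov flags to export: cmp_versions splits both version strings on the IFS characters ".-:", compares the fields numerically, and returns success as soon as a field of the first version is smaller. A simplified sketch of that logic, condensed from the traced steps (the real helper also strips non-numeric suffixes via its decimal function):

    version_lt() {                     # succeeds when $1 is an older version than $2
        local -a ver1 ver2
        IFS='.-:' read -r -a ver1 <<< "$1"
        IFS='.-:' read -r -a ver2 <<< "$2"
        local v max=${#ver1[@]}
        (( ${#ver2[@]} > max )) && max=${#ver2[@]}
        for (( v = 0; v < max; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1                       # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov is older than 2.x, export the legacy LCOV_OPTS"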
00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.867 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:48.868 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:48.868 Cannot find device "nvmf_init_br" 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:48.868 Cannot find device "nvmf_init_br2" 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:48.868 Cannot find device "nvmf_tgt_br" 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:48.868 Cannot find device "nvmf_tgt_br2" 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:48.868 Cannot find device "nvmf_init_br" 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:48.868 Cannot find device "nvmf_init_br2" 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:48.868 Cannot find device "nvmf_tgt_br" 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:48.868 Cannot find device "nvmf_tgt_br2" 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:48.868 Cannot find device "nvmf_br" 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:48.868 Cannot find device "nvmf_init_if" 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:15:48.868 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:49.126 Cannot find device "nvmf_init_if2" 00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:49.126 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:15:49.126 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
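nvmf_veth_init, whose commands are traced above, builds the whole test network from scratch: a namespace for the target, two veth pairs for the initiator side and two for the target side, the 10.0.0.1-10.0.0.4 addresses, and one bridge that stitches the peer ends together. A condensed sketch of the same sequence (link-up steps and error handling trimmed):

    ip netns add nvmf_tgt_ns_spdk

    # The *_if ends carry the addresses; their *_br peers get enslaved to the bridge.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # Target-side interfaces move into the namespace; the initiator side stays in the root ns.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # One bridge joins the four *_br peers, so 10.0.0.1/2 can reach 10.0.0.3/4.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done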
00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:49.126 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:49.386 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:49.386 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:49.386 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:49.386 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:49.386 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:15:49.386 00:15:49.386 --- 10.0.0.3 ping statistics --- 00:15:49.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.386 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:15:49.386 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:49.386 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:49.386 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:15:49.386 00:15:49.386 --- 10.0.0.4 ping statistics --- 00:15:49.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.386 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:49.386 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:49.386 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:49.386 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:49.386 00:15:49.386 --- 10.0.0.1 ping statistics --- 00:15:49.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.386 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:49.386 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:49.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
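The firewall rules a few entries above come from the ipts helper: every rule it inserts carries an "-m comment" tag of the form SPDK_NVMF:<original args>, so teardown can later strip exactly these rules and nothing else. A minimal sketch of that wrapper, matching the expanded iptables calls shown in the log:

  ipts() {
      # append the tag so iptables-save output can be filtered on SPDK_NVMF during cleanup
      iptables "$@" -m comment --comment "SPDK_NVMF:$*"
  }
  ipts -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP to the target port
  ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                  # let bridged traffic pass

The four ping checks that follow confirm the path works in both directions before any NVMe traffic is attempted.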
00:15:49.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:15:49.386 00:15:49.386 --- 10.0.0.2 ping statistics --- 00:15:49.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.386 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:15:49.386 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:49.386 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:15:49.386 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:49.386 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:49.386 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:49.386 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:49.386 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:49.386 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:49.386 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:49.386 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:15:49.386 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:49.386 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:49.386 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:49.386 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=73854 00:15:49.386 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:15:49.386 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 73854 00:15:49.386 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73854 ']' 00:15:49.387 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.387 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:49.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.387 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.387 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:49.387 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:49.387 [2024-11-29 13:02:20.731847] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
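nvmfappstart, just above, launches the first target (pid 73854 in this run) inside the namespace and then blocks in waitforlisten until the app's RPC socket answers. A hand-rolled equivalent, kept deliberately simple; polling for the UNIX socket is an approximation of what the real helper does, not its exact implementation:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &
  nvmfpid=$!
  for _ in $(seq 1 100); do                                        # wait for /var/tmp/spdk.sock to come up
      [ -S /var/tmp/spdk.sock ] && break
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
      sleep 0.1
  done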
00:15:49.387 [2024-11-29 13:02:20.731949] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.387 [2024-11-29 13:02:20.869471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.650 [2024-11-29 13:02:20.941665] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:49.650 [2024-11-29 13:02:20.941757] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:49.650 [2024-11-29 13:02:20.941792] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:49.650 [2024-11-29 13:02:20.941806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:49.650 [2024-11-29 13:02:20.941818] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:49.650 [2024-11-29 13:02:20.942337] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.650 [2024-11-29 13:02:20.999749] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:50.586 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:50.586 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:15:50.586 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=73886 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
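The app_setup_trace notices above are worth acting on when a run like this needs debugging: tracepoint group mask 0xFFFF is already enabled, so a snapshot can be pulled while the target is alive, or the shared-memory trace file can be copied for offline analysis (both commands are quoted from the notices themselves):

  spdk_trace -s nvmf -i 0                 # live snapshot of the first target's events
  cp /dev/shm/nvmf_trace.0 /tmp/          # or keep the raw trace file for later inspection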
-- # [[ -z 10.0.0.1 ]] 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=60deacd7-7f98-4ac7-bc6e-8e9d7910a6b0 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=8d3fa463-33ee-4558-a6bc-93f8a3b001b5 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=ae747f9d-0750-4caa-b358-70edc3d38f27 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:50.587 null0 00:15:50.587 null1 00:15:50.587 null2 00:15:50.587 [2024-11-29 13:02:21.861048] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:50.587 [2024-11-29 13:02:21.882518] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:15:50.587 [2024-11-29 13:02:21.882655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73886 ] 00:15:50.587 [2024-11-29 13:02:21.885183] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 73886 /var/tmp/tgt2.sock 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73886 ']' 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:50.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
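The rpc_cmd heredoc at target/nsid.sh@63 above, and the rpc.py -s /var/tmp/tgt2.sock call that follows at @80, are collapsed by xtrace; only their side effects show up (the null0/null1/null2 bdevs, a TCP transport on each target, and listeners on 10.0.0.3:4420 and 10.0.0.1:4421). A hedged sketch of the kind of RPCs involved: the method names are standard SPDK rpc.py calls, but which target receives which call, and the exact options (sizes, allow-any-host, per-namespace UUIDs), are assumptions rather than facts from this log:

  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock'   # or the default /var/tmp/spdk.sock
  $RPC nvmf_create_transport -t tcp                                # "*** TCP Transport Init ***" in the log
  $RPC bdev_null_create null0 64 512                               # backing bdevs; size/block_size assumed
  $RPC nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a         # NQN taken from the later nvme connect
  $RPC nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null0      # ns1uuid..ns3uuid end up as namespace UUIDs
  $RPC nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t tcp -a 10.0.0.1 -s 4421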
00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:50.587 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:50.587 [2024-11-29 13:02:22.052154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.845 [2024-11-29 13:02:22.125606] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.845 [2024-11-29 13:02:22.201651] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:51.103 13:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:51.103 13:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:15:51.103 13:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:15:51.361 [2024-11-29 13:02:22.837195] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:51.361 [2024-11-29 13:02:22.853275] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:15:51.619 nvme0n1 nvme0n2 00:15:51.619 nvme1n1 00:15:51.619 13:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:15:51.619 13:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:15:51.619 13:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:15:51.619 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:15:51.619 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:15:51.619 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:15:51.619 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:15:51.619 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:15:51.619 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:15:51.619 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:15:51.619 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:15:51.619 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:51.619 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:51.619 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:15:51.619 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:15:51.619 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:15:52.556 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:52.556 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:52.556 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:52.556 13:02:24 
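The connect-and-wait sequence that starts above reduces to the two steps below; both commands are taken verbatim from the log, only the retry bound (15 attempts, one second apart) comes from the helper's own counters:

  nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 \
      --hostid=e271b7c2-49c4-4f9a-9b27-9cb25d329b31
  i=0
  until lsblk -l -o NAME | grep -q -w nvme0n1; do                  # wait for the namespace's block device
      [ "$i" -lt 15 ] || { echo "nvme0n1 never appeared" >&2; exit 1; }
      i=$((i + 1)); sleep 1
  done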
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:52.556 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 60deacd7-7f98-4ac7-bc6e-8e9d7910a6b0 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=60deacd77f984ac7bc6e8e9d7910a6b0 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 60DEACD77F984AC7BC6E8E9D7910A6B0 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 60DEACD77F984AC7BC6E8E9D7910A6B0 == \6\0\D\E\A\C\D\7\7\F\9\8\4\A\C\7\B\C\6\E\8\E\9\D\7\9\1\0\A\6\B\0 ]] 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 8d3fa463-33ee-4558-a6bc-93f8a3b001b5 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=8d3fa46333ee4558a6bc93f8a3b001b5 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 8D3FA46333EE4558A6BC93F8A3B001B5 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 8D3FA46333EE4558A6BC93F8A3B001B5 == \8\D\3\F\A\4\6\3\3\3\E\E\4\5\5\8\A\6\B\C\9\3\F\8\A\3\B\0\0\1\B\5 ]] 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:52.815 13:02:24 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid ae747f9d-0750-4caa-b358-70edc3d38f27 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ae747f9d07504caab35870edc3d38f27 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo AE747F9D07504CAAB35870EDC3D38F27 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ AE747F9D07504CAAB35870EDC3D38F27 == \A\E\7\4\7\F\9\D\0\7\5\0\4\C\A\A\B\3\5\8\7\0\E\D\C\3\D\3\8\F\2\7 ]] 00:15:52.815 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:15:53.075 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:15:53.075 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:15:53.075 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 73886 00:15:53.075 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73886 ']' 00:15:53.075 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73886 00:15:53.075 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:15:53.075 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:53.075 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73886 00:15:53.075 killing process with pid 73886 00:15:53.075 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:53.075 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:53.075 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73886' 00:15:53.075 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73886 00:15:53.075 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73886 00:15:53.642 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:15:53.642 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:53.642 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:15:53.642 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
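The three comparisons above are the point of the whole nsid test: a namespace created with a given UUID must report that UUID, dashes removed, as its NGUID over the fabric. Condensed from the exact commands in the log, for the first namespace:

  uuid=60deacd7-7f98-4ac7-bc6e-8e9d7910a6b0                        # ns1uuid generated earlier
  expected=$(tr -d - <<< "$uuid")                                   # uuid2nguid: strip the dashes
  actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)          # nvme_get_nguid
  [[ "${actual^^}" == "${expected^^}" ]] || echo "NGUID mismatch on nvme0n1" >&2

nvme0n2 and nvme0n3 are checked the same way against ns2uuid and ns3uuid before the host disconnects.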
'[' tcp == tcp ']' 00:15:53.642 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:15:53.642 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:53.642 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:53.642 rmmod nvme_tcp 00:15:53.642 rmmod nvme_fabrics 00:15:53.642 rmmod nvme_keyring 00:15:53.642 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:53.642 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:15:53.642 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:15:53.642 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 73854 ']' 00:15:53.642 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 73854 00:15:53.642 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73854 ']' 00:15:53.642 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73854 00:15:53.642 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:15:53.642 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:53.642 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73854 00:15:53.642 killing process with pid 73854 00:15:53.642 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:53.642 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:53.642 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73854' 00:15:53.642 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73854 00:15:53.642 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73854 00:15:53.900 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:53.900 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:53.900 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:53.900 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:15:53.900 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:15:53.900 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:53.900 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:15:53.900 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:53.900 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:53.900 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:53.900 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:53.900 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:53.900 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:15:53.900 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:53.900 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:53.900 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:53.900 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:53.900 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:53.900 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:53.900 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:53.900 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:54.158 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:54.158 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:54.158 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.158 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:54.159 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.159 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:15:54.159 ************************************ 00:15:54.159 END TEST nvmf_nsid 00:15:54.159 00:15:54.159 real 0m5.453s 00:15:54.159 user 0m8.004s 00:15:54.159 sys 0m1.711s 00:15:54.159 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.159 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:54.159 ************************************ 00:15:54.159 13:02:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:54.159 00:15:54.159 real 5m11.453s 00:15:54.159 user 10m50.800s 00:15:54.159 sys 1m11.385s 00:15:54.159 ************************************ 00:15:54.159 END TEST nvmf_target_extra 00:15:54.159 ************************************ 00:15:54.159 13:02:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.159 13:02:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:54.159 13:02:25 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:15:54.159 13:02:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:54.159 13:02:25 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.159 13:02:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:54.159 ************************************ 00:15:54.159 START TEST nvmf_host 00:15:54.159 ************************************ 00:15:54.159 13:02:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:15:54.159 * Looking for test storage... 
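Teardown, completed just above, is the mirror image of setup: the kernel initiator modules are unloaded, the SPDK_NVMF-tagged firewall rules are filtered out of the saved ruleset (which is why the tag was added in the first place), the veth/bridge devices are deleted, and the namespace is removed. The essential commands, all visible in the log except the final netns delete, which is the effect of the remove_spdk_ns helper and is assumed here:

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore             # iptr: drop only the rules this test added
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if && ip link delete nvmf_init_if2
  ip netns delete nvmf_tgt_ns_spdk                                  # remove_spdk_ns (command assumed)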
00:15:54.419 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:54.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.419 --rc genhtml_branch_coverage=1 00:15:54.419 --rc genhtml_function_coverage=1 00:15:54.419 --rc genhtml_legend=1 00:15:54.419 --rc geninfo_all_blocks=1 00:15:54.419 --rc geninfo_unexecuted_blocks=1 00:15:54.419 00:15:54.419 ' 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:54.419 --rc lcov_branch_coverage=1 --rc 
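The scripts/common.sh excerpt above is the lcov version gate: it splits each version string on "." and "-" and compares field by field, and when the installed lcov is older than 2 it switches to the --rc style of options. A condensed sketch of that comparison, following the cmp_versions walk visible in the trace:

  lt() {                                   # succeeds when version $1 < version $2
      local -a a b; local v
      IFS=.- read -ra a <<< "$1"; IFS=.- read -ra b <<< "$2"
      for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
          (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
          (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
      done
      return 1
  }
  lt "$(lcov --version | awk '{print $NF}')" 2 && \
      LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'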
lcov_function_coverage=1 00:15:54.419 --rc genhtml_branch_coverage=1 00:15:54.419 --rc genhtml_function_coverage=1 00:15:54.419 --rc genhtml_legend=1 00:15:54.419 --rc geninfo_all_blocks=1 00:15:54.419 --rc geninfo_unexecuted_blocks=1 00:15:54.419 00:15:54.419 ' 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:54.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.419 --rc genhtml_branch_coverage=1 00:15:54.419 --rc genhtml_function_coverage=1 00:15:54.419 --rc genhtml_legend=1 00:15:54.419 --rc geninfo_all_blocks=1 00:15:54.419 --rc geninfo_unexecuted_blocks=1 00:15:54.419 00:15:54.419 ' 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:54.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.419 --rc genhtml_branch_coverage=1 00:15:54.419 --rc genhtml_function_coverage=1 00:15:54.419 --rc genhtml_legend=1 00:15:54.419 --rc geninfo_all_blocks=1 00:15:54.419 --rc geninfo_unexecuted_blocks=1 00:15:54.419 00:15:54.419 ' 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:54.419 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:54.419 
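The recurring "[: : integer expression expected" line above is nvmf/common.sh line 33 doing a numeric test on an unset flag ('[' '' -eq 1 ']'); the run tolerates it because the branch is simply skipped, but a defaulted expansion would silence the warning. Which variable is empty there is not visible in this log, so the name below is a placeholder, not the real one:

  # hypothetical guard -- SOME_FLAG stands in for whatever variable is empty at common.sh:33
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then
      :   # the real branch body is not shown in this log
  fi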
13:02:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:54.419 ************************************ 00:15:54.419 START TEST nvmf_identify 00:15:54.419 ************************************ 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:54.419 * Looking for test storage... 00:15:54.419 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:15:54.419 13:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:54.679 13:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:54.679 13:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:54.679 13:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:54.679 13:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:54.679 13:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:15:54.679 13:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:15:54.679 13:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:15:54.679 13:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:15:54.679 13:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:15:54.679 13:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:15:54.679 13:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:15:54.679 13:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:54.679 13:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:15:54.679 13:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:15:54.679 13:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:54.679 13:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:54.679 13:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:15:54.679 13:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:15:54.679 13:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:54.679 13:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:15:54.679 13:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:15:54.679 13:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:15:54.679 13:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:15:54.679 13:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:54.679 13:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:15:54.679 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:15:54.679 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:54.679 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:54.679 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:15:54.679 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:54.679 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:54.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.679 --rc genhtml_branch_coverage=1 00:15:54.679 --rc genhtml_function_coverage=1 00:15:54.679 --rc genhtml_legend=1 00:15:54.679 --rc geninfo_all_blocks=1 00:15:54.679 --rc geninfo_unexecuted_blocks=1 00:15:54.679 00:15:54.679 ' 00:15:54.679 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:54.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.679 --rc genhtml_branch_coverage=1 00:15:54.679 --rc genhtml_function_coverage=1 00:15:54.679 --rc genhtml_legend=1 00:15:54.679 --rc geninfo_all_blocks=1 00:15:54.679 --rc geninfo_unexecuted_blocks=1 00:15:54.679 00:15:54.679 ' 00:15:54.679 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:54.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.679 --rc genhtml_branch_coverage=1 00:15:54.679 --rc genhtml_function_coverage=1 00:15:54.679 --rc genhtml_legend=1 00:15:54.679 --rc geninfo_all_blocks=1 00:15:54.679 --rc geninfo_unexecuted_blocks=1 00:15:54.679 00:15:54.679 ' 00:15:54.679 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:54.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.679 --rc genhtml_branch_coverage=1 00:15:54.679 --rc genhtml_function_coverage=1 00:15:54.679 --rc genhtml_legend=1 00:15:54.679 --rc geninfo_all_blocks=1 00:15:54.679 --rc geninfo_unexecuted_blocks=1 00:15:54.679 00:15:54.679 ' 00:15:54.679 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:54.679 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:15:54.679 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:54.679 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:15:54.679 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.679 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:54.679 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.679 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.679 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.679 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:54.679 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.679 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.679 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:15:54.679 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:15:54.679 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.679 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.679 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:54.679 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:54.679 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:54.679 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:15:54.679 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.679 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.679 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.680 
13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:54.680 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.680 13:02:26 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:54.680 Cannot find device "nvmf_init_br" 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:54.680 Cannot find device "nvmf_init_br2" 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:54.680 Cannot find device "nvmf_tgt_br" 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:15:54.680 Cannot find device "nvmf_tgt_br2" 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:54.680 Cannot find device "nvmf_init_br" 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:54.680 Cannot find device "nvmf_init_br2" 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:54.680 Cannot find device "nvmf_tgt_br" 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:54.680 Cannot find device "nvmf_tgt_br2" 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:54.680 Cannot find device "nvmf_br" 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:54.680 Cannot find device "nvmf_init_if" 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:54.680 Cannot find device "nvmf_init_if2" 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:54.680 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:54.680 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:54.680 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:54.939 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:54.940 
13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:54.940 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:54.940 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:15:54.940 00:15:54.940 --- 10.0.0.3 ping statistics --- 00:15:54.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.940 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:54.940 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:54.940 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:15:54.940 00:15:54.940 --- 10.0.0.4 ping statistics --- 00:15:54.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.940 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:54.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:54.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:15:54.940 00:15:54.940 --- 10.0.0.1 ping statistics --- 00:15:54.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.940 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:54.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:54.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:15:54.940 00:15:54.940 --- 10.0.0.2 ping statistics --- 00:15:54.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.940 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:54.940 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:55.200 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74247 00:15:55.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
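The nvmf_veth_init block traced above (nvmf/common.sh) builds two initiator-side and two target-side veth pairs, moves the target ends into the nvmf_tgt_ns_spdk namespace, bridges the host-side peers, and opens TCP port 4420 through iptables; the earlier "Cannot find device" messages are simply the expected result of tearing down interfaces that do not exist yet. As a reading aid, the logged commands condense to the following standalone sketch. The ipts wrapper seen in the trace is assumed to be SPDK's helper that merely appends an 'SPDK_NVMF:' comment to each rule, so plain iptables is used here.

# Sketch of the topology assembled by nvmf_veth_init (commands taken from the trace above).
ip netns add nvmf_tgt_ns_spdk

# Initiator-side and target-side veth pairs.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target interfaces live inside the namespace; initiator interfaces stay on the host.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up, then enslave all host-side peers to one bridge.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Allow NVMe/TCP (port 4420) in on the initiator interfaces and forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity checks mirroring the pings in the log.
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

Bridging the four host-side peers on nvmf_br is what lets the initiator addresses 10.0.0.1/10.0.0.2 reach the namespaced target addresses 10.0.0.3/10.0.0.4, which the ping statistics above confirm before the target is started.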
00:15:55.200 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:55.200 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74247 00:15:55.200 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:55.200 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 74247 ']' 00:15:55.200 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.200 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:55.200 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.200 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:55.200 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:55.200 [2024-11-29 13:02:26.518006] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:15:55.200 [2024-11-29 13:02:26.518451] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.200 [2024-11-29 13:02:26.672426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:55.459 [2024-11-29 13:02:26.738450] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:55.459 [2024-11-29 13:02:26.738518] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:55.459 [2024-11-29 13:02:26.738537] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:55.459 [2024-11-29 13:02:26.738547] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:55.459 [2024-11-29 13:02:26.738557] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
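With the plumbing in place, host/identify.sh starts the target inside the namespace and waits for its RPC socket, as shown by the nvmf_tgt command line and the "Waiting for process to start up..." message above. A hand-run equivalent looks roughly like this; the polling loop is only a simplified stand-in for the harness's waitforlisten helper, not the helper itself.

# Launch the SPDK NVMe-oF target inside the target namespace, exactly as logged
# (-e 0xFFFF is the tracepoint group mask and -m 0xF pins it to cores 0-3,
# matching the four reactor threads reported just below).
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Simplified stand-in for waitforlisten: wait until the RPC UNIX-domain socket
# appears, bailing out if the target dies first.
rpc_sock=/var/tmp/spdk.sock
until [ -S "$rpc_sock" ]; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
done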
00:15:55.459 [2024-11-29 13:02:26.739959] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.460 [2024-11-29 13:02:26.740064] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:55.460 [2024-11-29 13:02:26.740195] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:15:55.460 [2024-11-29 13:02:26.740202] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.460 [2024-11-29 13:02:26.800227] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:55.460 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.460 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:15:55.460 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:55.460 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.460 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:55.460 [2024-11-29 13:02:26.884509] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:55.460 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.460 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:15:55.460 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:55.460 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:55.460 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:55.460 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.460 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:55.719 Malloc0 00:15:55.719 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.719 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:55.719 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.719 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:55.719 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.719 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:15:55.719 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.719 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:55.719 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.719 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:55.719 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.719 13:02:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:55.719 [2024-11-29 13:02:27.003588] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:55.719 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.719 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:55.719 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.719 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:55.719 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.719 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:15:55.719 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.719 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:55.719 [ 00:15:55.719 { 00:15:55.719 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:55.719 "subtype": "Discovery", 00:15:55.719 "listen_addresses": [ 00:15:55.719 { 00:15:55.719 "trtype": "TCP", 00:15:55.719 "adrfam": "IPv4", 00:15:55.719 "traddr": "10.0.0.3", 00:15:55.719 "trsvcid": "4420" 00:15:55.719 } 00:15:55.719 ], 00:15:55.719 "allow_any_host": true, 00:15:55.719 "hosts": [] 00:15:55.719 }, 00:15:55.719 { 00:15:55.719 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:55.719 "subtype": "NVMe", 00:15:55.719 "listen_addresses": [ 00:15:55.719 { 00:15:55.719 "trtype": "TCP", 00:15:55.719 "adrfam": "IPv4", 00:15:55.719 "traddr": "10.0.0.3", 00:15:55.719 "trsvcid": "4420" 00:15:55.719 } 00:15:55.719 ], 00:15:55.719 "allow_any_host": true, 00:15:55.719 "hosts": [], 00:15:55.719 "serial_number": "SPDK00000000000001", 00:15:55.719 "model_number": "SPDK bdev Controller", 00:15:55.719 "max_namespaces": 32, 00:15:55.719 "min_cntlid": 1, 00:15:55.719 "max_cntlid": 65519, 00:15:55.720 "namespaces": [ 00:15:55.720 { 00:15:55.720 "nsid": 1, 00:15:55.720 "bdev_name": "Malloc0", 00:15:55.720 "name": "Malloc0", 00:15:55.720 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:15:55.720 "eui64": "ABCDEF0123456789", 00:15:55.720 "uuid": "2ab01bc3-5724-468f-b452-646708cd0ffb" 00:15:55.720 } 00:15:55.720 ] 00:15:55.720 } 00:15:55.720 ] 00:15:55.720 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.720 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:15:55.720 [2024-11-29 13:02:27.062759] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
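The rpc_cmd calls traced above are what populate the subsystem listing just printed. Assuming rpc_cmd is the usual harness wrapper that forwards its arguments to SPDK's scripts/rpc.py (the path below is assumed from the repo location in the trace), the same target configuration and the identify query that follows can be reproduced by hand like this:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed location of the RPC client

# Transport, backing bdev (64 MiB malloc, 512-byte blocks), subsystem, namespace
# and listeners, with the options copied verbatim from the trace above.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_get_subsystems          # prints the JSON listing shown above

# Query the discovery controller over NVMe/TCP, exactly as host/identify.sh does;
# its output is the controller and discovery-log dump that follows in the trace.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all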
00:15:55.720 [2024-11-29 13:02:27.062825] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74274 ] 00:15:55.720 [2024-11-29 13:02:27.221790] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:15:55.720 [2024-11-29 13:02:27.221889] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:55.720 [2024-11-29 13:02:27.221898] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:55.720 [2024-11-29 13:02:27.221945] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:55.720 [2024-11-29 13:02:27.221961] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:55.720 [2024-11-29 13:02:27.222309] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:15:55.720 [2024-11-29 13:02:27.222374] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e6d750 0 00:15:55.720 [2024-11-29 13:02:27.227905] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:55.720 [2024-11-29 13:02:27.227932] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:55.720 [2024-11-29 13:02:27.227938] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:55.720 [2024-11-29 13:02:27.227942] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:55.720 [2024-11-29 13:02:27.228000] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.720 [2024-11-29 13:02:27.228011] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.720 [2024-11-29 13:02:27.228015] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e6d750) 00:15:55.720 [2024-11-29 13:02:27.228031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:55.720 [2024-11-29 13:02:27.228066] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ed1740, cid 0, qid 0 00:15:55.985 [2024-11-29 13:02:27.235929] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.985 [2024-11-29 13:02:27.235952] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.985 [2024-11-29 13:02:27.235974] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.985 [2024-11-29 13:02:27.235980] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ed1740) on tqpair=0x1e6d750 00:15:55.985 [2024-11-29 13:02:27.235992] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:55.985 [2024-11-29 13:02:27.236001] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:15:55.985 [2024-11-29 13:02:27.236008] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:15:55.985 [2024-11-29 13:02:27.236028] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.985 [2024-11-29 13:02:27.236034] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:15:55.985 [2024-11-29 13:02:27.236039] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e6d750) 00:15:55.985 [2024-11-29 13:02:27.236049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.985 [2024-11-29 13:02:27.236080] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ed1740, cid 0, qid 0 00:15:55.985 [2024-11-29 13:02:27.236157] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.985 [2024-11-29 13:02:27.236165] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.985 [2024-11-29 13:02:27.236169] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.985 [2024-11-29 13:02:27.236173] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ed1740) on tqpair=0x1e6d750 00:15:55.985 [2024-11-29 13:02:27.236180] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:15:55.985 [2024-11-29 13:02:27.236188] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:15:55.985 [2024-11-29 13:02:27.236196] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.985 [2024-11-29 13:02:27.236216] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.985 [2024-11-29 13:02:27.236221] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e6d750) 00:15:55.985 [2024-11-29 13:02:27.236229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.985 [2024-11-29 13:02:27.236250] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ed1740, cid 0, qid 0 00:15:55.985 [2024-11-29 13:02:27.236294] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.985 [2024-11-29 13:02:27.236302] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.985 [2024-11-29 13:02:27.236306] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.986 [2024-11-29 13:02:27.236310] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ed1740) on tqpair=0x1e6d750 00:15:55.986 [2024-11-29 13:02:27.236316] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:15:55.986 [2024-11-29 13:02:27.236325] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:15:55.986 [2024-11-29 13:02:27.236348] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.986 [2024-11-29 13:02:27.236354] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.986 [2024-11-29 13:02:27.236358] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e6d750) 00:15:55.986 [2024-11-29 13:02:27.236367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.986 [2024-11-29 13:02:27.236389] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ed1740, cid 0, qid 0 00:15:55.986 [2024-11-29 13:02:27.236431] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.986 [2024-11-29 13:02:27.236438] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.986 [2024-11-29 13:02:27.236442] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.986 [2024-11-29 13:02:27.236446] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ed1740) on tqpair=0x1e6d750 00:15:55.986 [2024-11-29 13:02:27.236453] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:55.986 [2024-11-29 13:02:27.236464] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.986 [2024-11-29 13:02:27.236469] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.986 [2024-11-29 13:02:27.236473] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e6d750) 00:15:55.986 [2024-11-29 13:02:27.236481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.986 [2024-11-29 13:02:27.236499] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ed1740, cid 0, qid 0 00:15:55.986 [2024-11-29 13:02:27.236542] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.986 [2024-11-29 13:02:27.236550] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.986 [2024-11-29 13:02:27.236553] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.986 [2024-11-29 13:02:27.236558] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ed1740) on tqpair=0x1e6d750 00:15:55.986 [2024-11-29 13:02:27.236563] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:15:55.986 [2024-11-29 13:02:27.236569] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:15:55.986 [2024-11-29 13:02:27.236577] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:55.986 [2024-11-29 13:02:27.236689] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:15:55.986 [2024-11-29 13:02:27.236696] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:55.986 [2024-11-29 13:02:27.236706] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.986 [2024-11-29 13:02:27.236710] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.986 [2024-11-29 13:02:27.236714] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e6d750) 00:15:55.986 [2024-11-29 13:02:27.236722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.986 [2024-11-29 13:02:27.236742] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ed1740, cid 0, qid 0 00:15:55.986 [2024-11-29 13:02:27.236793] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.986 [2024-11-29 13:02:27.236800] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.986 [2024-11-29 13:02:27.236804] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:15:55.986 [2024-11-29 13:02:27.236809] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ed1740) on tqpair=0x1e6d750 00:15:55.986 [2024-11-29 13:02:27.236814] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:55.986 [2024-11-29 13:02:27.236825] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.986 [2024-11-29 13:02:27.236830] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.986 [2024-11-29 13:02:27.236834] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e6d750) 00:15:55.986 [2024-11-29 13:02:27.236841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.986 [2024-11-29 13:02:27.236860] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ed1740, cid 0, qid 0 00:15:55.986 [2024-11-29 13:02:27.236904] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.986 [2024-11-29 13:02:27.236913] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.986 [2024-11-29 13:02:27.236917] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.986 [2024-11-29 13:02:27.236921] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ed1740) on tqpair=0x1e6d750 00:15:55.986 [2024-11-29 13:02:27.236927] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:55.986 [2024-11-29 13:02:27.236933] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:15:55.986 [2024-11-29 13:02:27.236942] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:15:55.986 [2024-11-29 13:02:27.236953] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:15:55.986 [2024-11-29 13:02:27.236964] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.986 [2024-11-29 13:02:27.236969] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e6d750) 00:15:55.986 [2024-11-29 13:02:27.236978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.986 [2024-11-29 13:02:27.236999] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ed1740, cid 0, qid 0 00:15:55.986 [2024-11-29 13:02:27.237093] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:55.986 [2024-11-29 13:02:27.237102] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:55.986 [2024-11-29 13:02:27.237106] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:55.986 [2024-11-29 13:02:27.237110] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e6d750): datao=0, datal=4096, cccid=0 00:15:55.986 [2024-11-29 13:02:27.237115] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ed1740) on tqpair(0x1e6d750): expected_datao=0, payload_size=4096 00:15:55.986 [2024-11-29 13:02:27.237121] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:15:55.986 [2024-11-29 13:02:27.237129] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:55.986 [2024-11-29 13:02:27.237134] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:55.986 [2024-11-29 13:02:27.237144] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.986 [2024-11-29 13:02:27.237151] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.986 [2024-11-29 13:02:27.237154] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.986 [2024-11-29 13:02:27.237159] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ed1740) on tqpair=0x1e6d750 00:15:55.986 [2024-11-29 13:02:27.237168] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:15:55.986 [2024-11-29 13:02:27.237174] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:15:55.986 [2024-11-29 13:02:27.237178] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:15:55.986 [2024-11-29 13:02:27.237189] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:15:55.986 [2024-11-29 13:02:27.237194] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:15:55.986 [2024-11-29 13:02:27.237200] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:15:55.986 [2024-11-29 13:02:27.237210] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:15:55.986 [2024-11-29 13:02:27.237218] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.986 [2024-11-29 13:02:27.237223] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.986 [2024-11-29 13:02:27.237227] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e6d750) 00:15:55.986 [2024-11-29 13:02:27.237235] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:55.986 [2024-11-29 13:02:27.237257] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ed1740, cid 0, qid 0 00:15:55.986 [2024-11-29 13:02:27.237313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.986 [2024-11-29 13:02:27.237320] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.986 [2024-11-29 13:02:27.237324] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.986 [2024-11-29 13:02:27.237328] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ed1740) on tqpair=0x1e6d750 00:15:55.986 [2024-11-29 13:02:27.237337] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.986 [2024-11-29 13:02:27.237341] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.986 [2024-11-29 13:02:27.237345] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e6d750) 00:15:55.986 [2024-11-29 13:02:27.237352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.986 
[2024-11-29 13:02:27.237359] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.986 [2024-11-29 13:02:27.237363] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.986 [2024-11-29 13:02:27.237367] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e6d750) 00:15:55.986 [2024-11-29 13:02:27.237373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.986 [2024-11-29 13:02:27.237380] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.986 [2024-11-29 13:02:27.237384] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.986 [2024-11-29 13:02:27.237388] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e6d750) 00:15:55.986 [2024-11-29 13:02:27.237394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.986 [2024-11-29 13:02:27.237400] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.986 [2024-11-29 13:02:27.237405] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.987 [2024-11-29 13:02:27.237408] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e6d750) 00:15:55.987 [2024-11-29 13:02:27.237414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.987 [2024-11-29 13:02:27.237420] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:55.987 [2024-11-29 13:02:27.237429] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:55.987 [2024-11-29 13:02:27.237437] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.987 [2024-11-29 13:02:27.237441] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e6d750) 00:15:55.987 [2024-11-29 13:02:27.237448] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.987 [2024-11-29 13:02:27.237478] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ed1740, cid 0, qid 0 00:15:55.987 [2024-11-29 13:02:27.237486] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ed18c0, cid 1, qid 0 00:15:55.987 [2024-11-29 13:02:27.237491] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ed1a40, cid 2, qid 0 00:15:55.987 [2024-11-29 13:02:27.237496] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ed1bc0, cid 3, qid 0 00:15:55.987 [2024-11-29 13:02:27.237501] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ed1d40, cid 4, qid 0 00:15:55.987 [2024-11-29 13:02:27.237587] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.987 [2024-11-29 13:02:27.237594] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.987 [2024-11-29 13:02:27.237598] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.987 [2024-11-29 13:02:27.237603] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ed1d40) on tqpair=0x1e6d750 00:15:55.987 [2024-11-29 
13:02:27.237609] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:15:55.987 [2024-11-29 13:02:27.237614] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:15:55.987 [2024-11-29 13:02:27.237626] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.987 [2024-11-29 13:02:27.237631] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e6d750) 00:15:55.987 [2024-11-29 13:02:27.237639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.987 [2024-11-29 13:02:27.237657] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ed1d40, cid 4, qid 0 00:15:55.987 [2024-11-29 13:02:27.237712] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:55.987 [2024-11-29 13:02:27.237720] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:55.987 [2024-11-29 13:02:27.237724] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:55.987 [2024-11-29 13:02:27.237728] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e6d750): datao=0, datal=4096, cccid=4 00:15:55.987 [2024-11-29 13:02:27.237733] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ed1d40) on tqpair(0x1e6d750): expected_datao=0, payload_size=4096 00:15:55.987 [2024-11-29 13:02:27.237737] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.987 [2024-11-29 13:02:27.237745] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:55.987 [2024-11-29 13:02:27.237750] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:55.987 [2024-11-29 13:02:27.237759] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.987 [2024-11-29 13:02:27.237765] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.987 [2024-11-29 13:02:27.237769] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.987 [2024-11-29 13:02:27.237773] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ed1d40) on tqpair=0x1e6d750 00:15:55.987 [2024-11-29 13:02:27.237787] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:15:55.987 [2024-11-29 13:02:27.237816] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.987 [2024-11-29 13:02:27.237823] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e6d750) 00:15:55.987 [2024-11-29 13:02:27.237830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.987 [2024-11-29 13:02:27.237838] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.987 [2024-11-29 13:02:27.237842] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.987 [2024-11-29 13:02:27.237846] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e6d750) 00:15:55.987 [2024-11-29 13:02:27.237853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.987 [2024-11-29 13:02:27.237900] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ed1d40, cid 4, qid 0 00:15:55.987 [2024-11-29 13:02:27.237909] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ed1ec0, cid 5, qid 0 00:15:55.987 [2024-11-29 13:02:27.238022] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:55.987 [2024-11-29 13:02:27.238029] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:55.987 [2024-11-29 13:02:27.238033] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:55.987 [2024-11-29 13:02:27.238037] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e6d750): datao=0, datal=1024, cccid=4 00:15:55.987 [2024-11-29 13:02:27.238042] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ed1d40) on tqpair(0x1e6d750): expected_datao=0, payload_size=1024 00:15:55.987 [2024-11-29 13:02:27.238047] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.987 [2024-11-29 13:02:27.238054] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:55.987 [2024-11-29 13:02:27.238059] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:55.987 [2024-11-29 13:02:27.238065] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.987 [2024-11-29 13:02:27.238071] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.987 [2024-11-29 13:02:27.238075] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.987 [2024-11-29 13:02:27.238079] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ed1ec0) on tqpair=0x1e6d750 00:15:55.987 [2024-11-29 13:02:27.238099] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.987 [2024-11-29 13:02:27.238107] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.987 [2024-11-29 13:02:27.238111] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.987 [2024-11-29 13:02:27.238116] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ed1d40) on tqpair=0x1e6d750 00:15:55.987 [2024-11-29 13:02:27.238129] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.987 [2024-11-29 13:02:27.238135] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e6d750) 00:15:55.987 [2024-11-29 13:02:27.238142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.987 [2024-11-29 13:02:27.238168] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ed1d40, cid 4, qid 0 00:15:55.987 [2024-11-29 13:02:27.238236] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:55.987 [2024-11-29 13:02:27.238243] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:55.987 [2024-11-29 13:02:27.238247] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:55.987 [2024-11-29 13:02:27.238251] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e6d750): datao=0, datal=3072, cccid=4 00:15:55.987 [2024-11-29 13:02:27.238256] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ed1d40) on tqpair(0x1e6d750): expected_datao=0, payload_size=3072 00:15:55.987 [2024-11-29 13:02:27.238261] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.987 [2024-11-29 13:02:27.238268] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:15:55.987 [2024-11-29 13:02:27.238273] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:55.987 [2024-11-29 13:02:27.238282] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.987 [2024-11-29 13:02:27.238288] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.987 [2024-11-29 13:02:27.238292] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.987 [2024-11-29 13:02:27.238296] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ed1d40) on tqpair=0x1e6d750 00:15:55.987 [2024-11-29 13:02:27.238307] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.987 [2024-11-29 13:02:27.238312] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e6d750) 00:15:55.987 [2024-11-29 13:02:27.238319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.987 [2024-11-29 13:02:27.238344] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ed1d40, cid 4, qid 0 00:15:55.987 [2024-11-29 13:02:27.238411] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:55.987 [2024-11-29 13:02:27.238418] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:55.987 [2024-11-29 13:02:27.238423] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:55.987 [2024-11-29 13:02:27.238426] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e6d750): datao=0, datal=8, cccid=4 00:15:55.987 [2024-11-29 13:02:27.238431] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ed1d40) on tqpair(0x1e6d750): expected_datao=0, payload_size=8 00:15:55.987 [2024-11-29 13:02:27.238436] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.987 [2024-11-29 13:02:27.238443] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:55.987 [2024-11-29 13:02:27.238448] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:55.987 [2024-11-29 13:02:27.238464] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.987 [2024-11-29 13:02:27.238472] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.987 [2024-11-29 13:02:27.238476] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.987 [2024-11-29 13:02:27.238480] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ed1d40) on tqpair=0x1e6d750 00:15:55.987 ===================================================== 00:15:55.987 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:55.987 ===================================================== 00:15:55.987 Controller Capabilities/Features 00:15:55.987 ================================ 00:15:55.987 Vendor ID: 0000 00:15:55.987 Subsystem Vendor ID: 0000 00:15:55.987 Serial Number: .................... 00:15:55.987 Model Number: ........................................ 
00:15:55.987 Firmware Version: 25.01 00:15:55.987 Recommended Arb Burst: 0 00:15:55.987 IEEE OUI Identifier: 00 00 00 00:15:55.987 Multi-path I/O 00:15:55.987 May have multiple subsystem ports: No 00:15:55.987 May have multiple controllers: No 00:15:55.987 Associated with SR-IOV VF: No 00:15:55.987 Max Data Transfer Size: 131072 00:15:55.987 Max Number of Namespaces: 0 00:15:55.987 Max Number of I/O Queues: 1024 00:15:55.987 NVMe Specification Version (VS): 1.3 00:15:55.987 NVMe Specification Version (Identify): 1.3 00:15:55.988 Maximum Queue Entries: 128 00:15:55.988 Contiguous Queues Required: Yes 00:15:55.988 Arbitration Mechanisms Supported 00:15:55.988 Weighted Round Robin: Not Supported 00:15:55.988 Vendor Specific: Not Supported 00:15:55.988 Reset Timeout: 15000 ms 00:15:55.988 Doorbell Stride: 4 bytes 00:15:55.988 NVM Subsystem Reset: Not Supported 00:15:55.988 Command Sets Supported 00:15:55.988 NVM Command Set: Supported 00:15:55.988 Boot Partition: Not Supported 00:15:55.988 Memory Page Size Minimum: 4096 bytes 00:15:55.988 Memory Page Size Maximum: 4096 bytes 00:15:55.988 Persistent Memory Region: Not Supported 00:15:55.988 Optional Asynchronous Events Supported 00:15:55.988 Namespace Attribute Notices: Not Supported 00:15:55.988 Firmware Activation Notices: Not Supported 00:15:55.988 ANA Change Notices: Not Supported 00:15:55.988 PLE Aggregate Log Change Notices: Not Supported 00:15:55.988 LBA Status Info Alert Notices: Not Supported 00:15:55.988 EGE Aggregate Log Change Notices: Not Supported 00:15:55.988 Normal NVM Subsystem Shutdown event: Not Supported 00:15:55.988 Zone Descriptor Change Notices: Not Supported 00:15:55.988 Discovery Log Change Notices: Supported 00:15:55.988 Controller Attributes 00:15:55.988 128-bit Host Identifier: Not Supported 00:15:55.988 Non-Operational Permissive Mode: Not Supported 00:15:55.988 NVM Sets: Not Supported 00:15:55.988 Read Recovery Levels: Not Supported 00:15:55.988 Endurance Groups: Not Supported 00:15:55.988 Predictable Latency Mode: Not Supported 00:15:55.988 Traffic Based Keep ALive: Not Supported 00:15:55.988 Namespace Granularity: Not Supported 00:15:55.988 SQ Associations: Not Supported 00:15:55.988 UUID List: Not Supported 00:15:55.988 Multi-Domain Subsystem: Not Supported 00:15:55.988 Fixed Capacity Management: Not Supported 00:15:55.988 Variable Capacity Management: Not Supported 00:15:55.988 Delete Endurance Group: Not Supported 00:15:55.988 Delete NVM Set: Not Supported 00:15:55.988 Extended LBA Formats Supported: Not Supported 00:15:55.988 Flexible Data Placement Supported: Not Supported 00:15:55.988 00:15:55.988 Controller Memory Buffer Support 00:15:55.988 ================================ 00:15:55.988 Supported: No 00:15:55.988 00:15:55.988 Persistent Memory Region Support 00:15:55.988 ================================ 00:15:55.988 Supported: No 00:15:55.988 00:15:55.988 Admin Command Set Attributes 00:15:55.988 ============================ 00:15:55.988 Security Send/Receive: Not Supported 00:15:55.988 Format NVM: Not Supported 00:15:55.988 Firmware Activate/Download: Not Supported 00:15:55.988 Namespace Management: Not Supported 00:15:55.988 Device Self-Test: Not Supported 00:15:55.988 Directives: Not Supported 00:15:55.988 NVMe-MI: Not Supported 00:15:55.988 Virtualization Management: Not Supported 00:15:55.988 Doorbell Buffer Config: Not Supported 00:15:55.988 Get LBA Status Capability: Not Supported 00:15:55.988 Command & Feature Lockdown Capability: Not Supported 00:15:55.988 Abort Command Limit: 1 00:15:55.988 Async 
Event Request Limit: 4 00:15:55.988 Number of Firmware Slots: N/A 00:15:55.988 Firmware Slot 1 Read-Only: N/A 00:15:55.988 Firmware Activation Without Reset: N/A 00:15:55.988 Multiple Update Detection Support: N/A 00:15:55.988 Firmware Update Granularity: No Information Provided 00:15:55.988 Per-Namespace SMART Log: No 00:15:55.988 Asymmetric Namespace Access Log Page: Not Supported 00:15:55.988 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:55.988 Command Effects Log Page: Not Supported 00:15:55.988 Get Log Page Extended Data: Supported 00:15:55.988 Telemetry Log Pages: Not Supported 00:15:55.988 Persistent Event Log Pages: Not Supported 00:15:55.988 Supported Log Pages Log Page: May Support 00:15:55.988 Commands Supported & Effects Log Page: Not Supported 00:15:55.988 Feature Identifiers & Effects Log Page:May Support 00:15:55.988 NVMe-MI Commands & Effects Log Page: May Support 00:15:55.988 Data Area 4 for Telemetry Log: Not Supported 00:15:55.988 Error Log Page Entries Supported: 128 00:15:55.988 Keep Alive: Not Supported 00:15:55.988 00:15:55.988 NVM Command Set Attributes 00:15:55.988 ========================== 00:15:55.988 Submission Queue Entry Size 00:15:55.988 Max: 1 00:15:55.988 Min: 1 00:15:55.988 Completion Queue Entry Size 00:15:55.988 Max: 1 00:15:55.988 Min: 1 00:15:55.988 Number of Namespaces: 0 00:15:55.988 Compare Command: Not Supported 00:15:55.988 Write Uncorrectable Command: Not Supported 00:15:55.988 Dataset Management Command: Not Supported 00:15:55.988 Write Zeroes Command: Not Supported 00:15:55.988 Set Features Save Field: Not Supported 00:15:55.988 Reservations: Not Supported 00:15:55.988 Timestamp: Not Supported 00:15:55.988 Copy: Not Supported 00:15:55.988 Volatile Write Cache: Not Present 00:15:55.988 Atomic Write Unit (Normal): 1 00:15:55.988 Atomic Write Unit (PFail): 1 00:15:55.988 Atomic Compare & Write Unit: 1 00:15:55.988 Fused Compare & Write: Supported 00:15:55.988 Scatter-Gather List 00:15:55.988 SGL Command Set: Supported 00:15:55.988 SGL Keyed: Supported 00:15:55.988 SGL Bit Bucket Descriptor: Not Supported 00:15:55.988 SGL Metadata Pointer: Not Supported 00:15:55.988 Oversized SGL: Not Supported 00:15:55.988 SGL Metadata Address: Not Supported 00:15:55.988 SGL Offset: Supported 00:15:55.988 Transport SGL Data Block: Not Supported 00:15:55.988 Replay Protected Memory Block: Not Supported 00:15:55.988 00:15:55.988 Firmware Slot Information 00:15:55.988 ========================= 00:15:55.988 Active slot: 0 00:15:55.988 00:15:55.988 00:15:55.988 Error Log 00:15:55.988 ========= 00:15:55.988 00:15:55.988 Active Namespaces 00:15:55.988 ================= 00:15:55.988 Discovery Log Page 00:15:55.988 ================== 00:15:55.988 Generation Counter: 2 00:15:55.988 Number of Records: 2 00:15:55.988 Record Format: 0 00:15:55.988 00:15:55.988 Discovery Log Entry 0 00:15:55.988 ---------------------- 00:15:55.988 Transport Type: 3 (TCP) 00:15:55.988 Address Family: 1 (IPv4) 00:15:55.988 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:55.988 Entry Flags: 00:15:55.988 Duplicate Returned Information: 1 00:15:55.988 Explicit Persistent Connection Support for Discovery: 1 00:15:55.988 Transport Requirements: 00:15:55.988 Secure Channel: Not Required 00:15:55.988 Port ID: 0 (0x0000) 00:15:55.988 Controller ID: 65535 (0xffff) 00:15:55.988 Admin Max SQ Size: 128 00:15:55.988 Transport Service Identifier: 4420 00:15:55.988 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:55.988 Transport Address: 10.0.0.3 00:15:55.988 
Discovery Log Entry 1 00:15:55.988 ---------------------- 00:15:55.988 Transport Type: 3 (TCP) 00:15:55.988 Address Family: 1 (IPv4) 00:15:55.988 Subsystem Type: 2 (NVM Subsystem) 00:15:55.988 Entry Flags: 00:15:55.988 Duplicate Returned Information: 0 00:15:55.988 Explicit Persistent Connection Support for Discovery: 0 00:15:55.988 Transport Requirements: 00:15:55.988 Secure Channel: Not Required 00:15:55.988 Port ID: 0 (0x0000) 00:15:55.988 Controller ID: 65535 (0xffff) 00:15:55.988 Admin Max SQ Size: 128 00:15:55.988 Transport Service Identifier: 4420 00:15:55.988 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:15:55.988 Transport Address: 10.0.0.3 [2024-11-29 13:02:27.238600] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:15:55.988 [2024-11-29 13:02:27.238619] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ed1740) on tqpair=0x1e6d750 00:15:55.988 [2024-11-29 13:02:27.238627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.988 [2024-11-29 13:02:27.238634] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ed18c0) on tqpair=0x1e6d750 00:15:55.988 [2024-11-29 13:02:27.238639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.988 [2024-11-29 13:02:27.238644] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ed1a40) on tqpair=0x1e6d750 00:15:55.988 [2024-11-29 13:02:27.238649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.988 [2024-11-29 13:02:27.238654] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ed1bc0) on tqpair=0x1e6d750 00:15:55.988 [2024-11-29 13:02:27.238659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.988 [2024-11-29 13:02:27.238674] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.988 [2024-11-29 13:02:27.238679] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.988 [2024-11-29 13:02:27.238683] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e6d750) 00:15:55.988 [2024-11-29 13:02:27.238692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.989 [2024-11-29 13:02:27.238719] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ed1bc0, cid 3, qid 0 00:15:55.989 [2024-11-29 13:02:27.238778] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.989 [2024-11-29 13:02:27.238786] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.989 [2024-11-29 13:02:27.238790] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.989 [2024-11-29 13:02:27.238795] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ed1bc0) on tqpair=0x1e6d750 00:15:55.989 [2024-11-29 13:02:27.238803] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.989 [2024-11-29 13:02:27.238807] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.989 [2024-11-29 13:02:27.238811] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e6d750) 00:15:55.989 [2024-11-29 
13:02:27.238819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.989 [2024-11-29 13:02:27.238842] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ed1bc0, cid 3, qid 0 00:15:55.989 [2024-11-29 13:02:27.238925] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.989 [2024-11-29 13:02:27.238934] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.989 [2024-11-29 13:02:27.238938] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.989 [2024-11-29 13:02:27.238943] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ed1bc0) on tqpair=0x1e6d750 00:15:55.989 [2024-11-29 13:02:27.238948] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:15:55.989 [2024-11-29 13:02:27.238954] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:15:55.989 [2024-11-29 13:02:27.238965] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.989 [2024-11-29 13:02:27.238970] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.989 [2024-11-29 13:02:27.238974] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e6d750) 00:15:55.989 [2024-11-29 13:02:27.238982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.989 [2024-11-29 13:02:27.239003] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ed1bc0, cid 3, qid 0 00:15:55.989 [2024-11-29 13:02:27.239049] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.989 [2024-11-29 13:02:27.239056] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.989 [2024-11-29 13:02:27.239060] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.989 [2024-11-29 13:02:27.239064] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ed1bc0) on tqpair=0x1e6d750 00:15:55.989 [2024-11-29 13:02:27.239076] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.989 [2024-11-29 13:02:27.239081] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.989 [2024-11-29 13:02:27.239085] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e6d750) 00:15:55.989 [2024-11-29 13:02:27.239103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.989 [2024-11-29 13:02:27.239129] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ed1bc0, cid 3, qid 0 00:15:55.989 [2024-11-29 13:02:27.239192] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.989 [2024-11-29 13:02:27.239200] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.989 [2024-11-29 13:02:27.239204] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.989 [2024-11-29 13:02:27.239208] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ed1bc0) on tqpair=0x1e6d750 00:15:55.989 [2024-11-29 13:02:27.239219] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.989 [2024-11-29 13:02:27.239224] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.989 [2024-11-29 13:02:27.239228] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e6d750) 00:15:55.989 [2024-11-29 13:02:27.239236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.989 [2024-11-29 13:02:27.239253] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ed1bc0, cid 3, qid 0 00:15:55.989 [2024-11-29 13:02:27.239306] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.989 [2024-11-29 13:02:27.239313] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.989 [2024-11-29 13:02:27.239317] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.989 [2024-11-29 13:02:27.239322] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ed1bc0) on tqpair=0x1e6d750 00:15:55.989 [2024-11-29 13:02:27.239336] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.989 [2024-11-29 13:02:27.239342] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.989 [2024-11-29 13:02:27.239346] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e6d750) 00:15:55.989 [2024-11-29 13:02:27.239353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.989 [2024-11-29 13:02:27.239371] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ed1bc0, cid 3, qid 0 00:15:55.989 [2024-11-29 13:02:27.239420] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.989 [2024-11-29 13:02:27.239435] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.989 [2024-11-29 13:02:27.239439] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.989 [2024-11-29 13:02:27.239443] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ed1bc0) on tqpair=0x1e6d750 00:15:55.989 [2024-11-29 13:02:27.239454] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.989 [2024-11-29 13:02:27.239459] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.989 [2024-11-29 13:02:27.239463] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e6d750) 00:15:55.989 [2024-11-29 13:02:27.239470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.989 [2024-11-29 13:02:27.239488] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ed1bc0, cid 3, qid 0 00:15:55.989 [2024-11-29 13:02:27.239535] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.989 [2024-11-29 13:02:27.239543] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.989 [2024-11-29 13:02:27.239546] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.989 [2024-11-29 13:02:27.239551] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ed1bc0) on tqpair=0x1e6d750 00:15:55.989 [2024-11-29 13:02:27.239561] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.989 [2024-11-29 13:02:27.239566] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.989 [2024-11-29 13:02:27.239570] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e6d750) 00:15:55.989 [2024-11-29 13:02:27.239578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.989 [2024-11-29 13:02:27.239595] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ed1bc0, cid 3, qid 0 00:15:55.989 [2024-11-29 13:02:27.239637] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.989 [2024-11-29 13:02:27.239644] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.989 [2024-11-29 13:02:27.239648] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.989 [2024-11-29 13:02:27.239653] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ed1bc0) on tqpair=0x1e6d750 00:15:55.989 [2024-11-29 13:02:27.239663] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.989 [2024-11-29 13:02:27.239668] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.989 [2024-11-29 13:02:27.239672] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e6d750) 00:15:55.989 [2024-11-29 13:02:27.239680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.989 [2024-11-29 13:02:27.239698] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ed1bc0, cid 3, qid 0 00:15:55.989 [2024-11-29 13:02:27.239740] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.989 [2024-11-29 13:02:27.239747] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.989 [2024-11-29 13:02:27.239751] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.989 [2024-11-29 13:02:27.239755] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ed1bc0) on tqpair=0x1e6d750 00:15:55.989 [2024-11-29 13:02:27.239765] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.989 [2024-11-29 13:02:27.239770] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.989 [2024-11-29 13:02:27.239774] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e6d750) 00:15:55.989 [2024-11-29 13:02:27.239782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.989 [2024-11-29 13:02:27.239800] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ed1bc0, cid 3, qid 0 00:15:55.989 [2024-11-29 13:02:27.239844] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.989 [2024-11-29 13:02:27.239852] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.989 [2024-11-29 13:02:27.239856] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.989 [2024-11-29 13:02:27.239860] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ed1bc0) on tqpair=0x1e6d750 00:15:55.989 [2024-11-29 13:02:27.239870] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.989 [2024-11-29 13:02:27.239876] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.989 [2024-11-29 13:02:27.245942] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e6d750) 00:15:55.989 [2024-11-29 13:02:27.245956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.989 [2024-11-29 13:02:27.245985] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ed1bc0, cid 3, qid 0 00:15:55.989 
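For reference, the Discovery Log Page dumped above (Generation Counter 2, two records, both TCP/IPv4 at 10.0.0.3:4420) follows the fixed wire layout defined by the NVMe over Fabrics specification. The sketch below is a minimal illustration of that layout, not SPDK's internal structures; the struct and field names are chosen here for readability.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* One 1024-byte discovery log entry, per the NVMe-oF spec layout. */
struct disc_log_entry {
	uint8_t  trtype;        /* 3 = TCP, as printed above                  */
	uint8_t  adrfam;        /* 1 = IPv4                                   */
	uint8_t  subtype;       /* 3 = discovery subsystem, 2 = NVM subsystem */
	uint8_t  treq;          /* transport requirements (secure channel)    */
	uint16_t portid;
	uint16_t cntlid;        /* 0xffff = dynamic controller model          */
	uint16_t asqsz;         /* Admin Max SQ Size: 128 above               */
	uint8_t  rsvd10[22];
	char     trsvcid[32];   /* "4420"                                     */
	uint8_t  rsvd64[192];
	char     subnqn[256];   /* e.g. nqn.2016-06.io.spdk:cnode1            */
	char     traddr[256];   /* e.g. 10.0.0.3                              */
	uint8_t  tsas[256];     /* transport-specific address subtype         */
};

/* 1024-byte log page header preceding the entries. */
struct disc_log_header {
	uint64_t genctr;        /* Generation Counter: 2 above */
	uint64_t numrec;        /* Number of Records: 2 above  */
	uint16_t recfmt;        /* Record Format: 0            */
	uint8_t  rsvd18[1006];
	struct disc_log_entry entries[];
};

int main(void)
{
	/* Sanity check: both structures should be exactly 1024 bytes. */
	printf("header=%zu entry=%zu\n",
	       sizeof(struct disc_log_header), sizeof(struct disc_log_entry));

	struct disc_log_entry e = { .trtype = 3, .adrfam = 1, .subtype = 2 };
	strcpy(e.trsvcid, "4420");
	strcpy(e.traddr, "10.0.0.3");
	strcpy(e.subnqn, "nqn.2016-06.io.spdk:cnode1");
	printf("trtype=%u traddr=%s trsvcid=%s subnqn=%s\n",
	       e.trtype, e.traddr, e.trsvcid, e.subnqn);
	return 0;
}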
[2024-11-29 13:02:27.246035] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.989 [2024-11-29 13:02:27.246043] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.989 [2024-11-29 13:02:27.246047] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.989 [2024-11-29 13:02:27.246052] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ed1bc0) on tqpair=0x1e6d750 00:15:55.989 [2024-11-29 13:02:27.246061] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:15:55.989 00:15:55.989 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:15:55.989 [2024-11-29 13:02:27.289169] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:15:55.989 [2024-11-29 13:02:27.289229] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74282 ] 00:15:55.990 [2024-11-29 13:02:27.451139] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:15:55.990 [2024-11-29 13:02:27.451211] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:55.990 [2024-11-29 13:02:27.451219] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:55.990 [2024-11-29 13:02:27.451236] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:55.990 [2024-11-29 13:02:27.451250] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:55.990 [2024-11-29 13:02:27.451557] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:15:55.990 [2024-11-29 13:02:27.451621] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x234e750 0 00:15:55.990 [2024-11-29 13:02:27.465996] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:55.990 [2024-11-29 13:02:27.466022] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:55.990 [2024-11-29 13:02:27.466029] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:55.990 [2024-11-29 13:02:27.466033] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:55.990 [2024-11-29 13:02:27.466071] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.990 [2024-11-29 13:02:27.466093] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.990 [2024-11-29 13:02:27.466097] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x234e750) 00:15:55.990 [2024-11-29 13:02:27.466111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:55.990 [2024-11-29 13:02:27.466143] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2740, cid 0, qid 0 00:15:55.990 [2024-11-29 13:02:27.473928] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.990 [2024-11-29 13:02:27.473951] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.990 [2024-11-29 13:02:27.473972] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.990 [2024-11-29 13:02:27.473977] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2740) on tqpair=0x234e750 00:15:55.990 [2024-11-29 13:02:27.473988] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:55.990 [2024-11-29 13:02:27.473996] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:15:55.990 [2024-11-29 13:02:27.474003] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:15:55.990 [2024-11-29 13:02:27.474022] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.990 [2024-11-29 13:02:27.474027] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.990 [2024-11-29 13:02:27.474031] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x234e750) 00:15:55.990 [2024-11-29 13:02:27.474041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.990 [2024-11-29 13:02:27.474069] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2740, cid 0, qid 0 00:15:55.990 [2024-11-29 13:02:27.474125] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.990 [2024-11-29 13:02:27.474132] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.990 [2024-11-29 13:02:27.474136] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.990 [2024-11-29 13:02:27.474140] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2740) on tqpair=0x234e750 00:15:55.990 [2024-11-29 13:02:27.474146] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:15:55.990 [2024-11-29 13:02:27.474153] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:15:55.990 [2024-11-29 13:02:27.474161] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.990 [2024-11-29 13:02:27.474181] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.990 [2024-11-29 13:02:27.474185] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x234e750) 00:15:55.990 [2024-11-29 13:02:27.474209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.990 [2024-11-29 13:02:27.474229] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2740, cid 0, qid 0 00:15:55.990 [2024-11-29 13:02:27.474280] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.990 [2024-11-29 13:02:27.474287] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.990 [2024-11-29 13:02:27.474291] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.990 [2024-11-29 13:02:27.474295] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2740) on tqpair=0x234e750 00:15:55.990 [2024-11-29 13:02:27.474301] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:15:55.990 [2024-11-29 13:02:27.474310] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:15:55.990 [2024-11-29 13:02:27.474318] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.990 [2024-11-29 13:02:27.474322] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.990 [2024-11-29 13:02:27.474327] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x234e750) 00:15:55.990 [2024-11-29 13:02:27.474334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.990 [2024-11-29 13:02:27.474352] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2740, cid 0, qid 0 00:15:55.990 [2024-11-29 13:02:27.474398] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.990 [2024-11-29 13:02:27.474405] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.990 [2024-11-29 13:02:27.474409] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.990 [2024-11-29 13:02:27.474414] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2740) on tqpair=0x234e750 00:15:55.990 [2024-11-29 13:02:27.474420] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:55.990 [2024-11-29 13:02:27.474432] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.990 [2024-11-29 13:02:27.474436] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.990 [2024-11-29 13:02:27.474440] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x234e750) 00:15:55.990 [2024-11-29 13:02:27.474448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.990 [2024-11-29 13:02:27.474465] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2740, cid 0, qid 0 00:15:55.990 [2024-11-29 13:02:27.474510] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.990 [2024-11-29 13:02:27.474517] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.990 [2024-11-29 13:02:27.474521] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.990 [2024-11-29 13:02:27.474525] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2740) on tqpair=0x234e750 00:15:55.990 [2024-11-29 13:02:27.474530] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:15:55.990 [2024-11-29 13:02:27.474536] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:15:55.990 [2024-11-29 13:02:27.474544] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:55.990 [2024-11-29 13:02:27.474656] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:15:55.990 [2024-11-29 13:02:27.474662] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:55.990 [2024-11-29 13:02:27.474673] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:15:55.990 [2024-11-29 13:02:27.474677] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.990 [2024-11-29 13:02:27.474681] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x234e750) 00:15:55.990 [2024-11-29 13:02:27.474689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.990 [2024-11-29 13:02:27.474709] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2740, cid 0, qid 0 00:15:55.990 [2024-11-29 13:02:27.474753] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.990 [2024-11-29 13:02:27.474760] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.990 [2024-11-29 13:02:27.474764] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.990 [2024-11-29 13:02:27.474768] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2740) on tqpair=0x234e750 00:15:55.990 [2024-11-29 13:02:27.474774] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:55.990 [2024-11-29 13:02:27.474784] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.990 [2024-11-29 13:02:27.474789] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.990 [2024-11-29 13:02:27.474793] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x234e750) 00:15:55.991 [2024-11-29 13:02:27.474801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.991 [2024-11-29 13:02:27.474818] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2740, cid 0, qid 0 00:15:55.991 [2024-11-29 13:02:27.474871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.991 [2024-11-29 13:02:27.474878] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.991 [2024-11-29 13:02:27.474882] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.991 [2024-11-29 13:02:27.474886] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2740) on tqpair=0x234e750 00:15:55.991 [2024-11-29 13:02:27.474891] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:55.991 [2024-11-29 13:02:27.474897] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:15:55.991 [2024-11-29 13:02:27.474905] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:15:55.991 [2024-11-29 13:02:27.474917] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:15:55.991 [2024-11-29 13:02:27.474952] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.991 [2024-11-29 13:02:27.474958] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x234e750) 00:15:55.991 [2024-11-29 13:02:27.474967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.991 [2024-11-29 13:02:27.474988] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2740, cid 0, qid 0 00:15:55.991 [2024-11-29 13:02:27.475088] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:55.991 [2024-11-29 13:02:27.475107] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:55.991 [2024-11-29 13:02:27.475118] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:55.991 [2024-11-29 13:02:27.475122] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x234e750): datao=0, datal=4096, cccid=0 00:15:55.991 [2024-11-29 13:02:27.475127] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b2740) on tqpair(0x234e750): expected_datao=0, payload_size=4096 00:15:55.991 [2024-11-29 13:02:27.475132] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.991 [2024-11-29 13:02:27.475141] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:55.991 [2024-11-29 13:02:27.475146] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:55.991 [2024-11-29 13:02:27.475155] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.991 [2024-11-29 13:02:27.475162] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.991 [2024-11-29 13:02:27.475165] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.991 [2024-11-29 13:02:27.475170] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2740) on tqpair=0x234e750 00:15:55.991 [2024-11-29 13:02:27.475179] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:15:55.991 [2024-11-29 13:02:27.475184] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:15:55.991 [2024-11-29 13:02:27.475189] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:15:55.991 [2024-11-29 13:02:27.475199] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:15:55.991 [2024-11-29 13:02:27.475205] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:15:55.991 [2024-11-29 13:02:27.475210] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:15:55.991 [2024-11-29 13:02:27.475220] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:15:55.991 [2024-11-29 13:02:27.475228] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.991 [2024-11-29 13:02:27.475233] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.991 [2024-11-29 13:02:27.475237] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x234e750) 00:15:55.991 [2024-11-29 13:02:27.475245] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:55.991 [2024-11-29 13:02:27.475266] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2740, cid 0, qid 0 00:15:55.991 [2024-11-29 13:02:27.475321] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.991 [2024-11-29 13:02:27.475328] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.991 [2024-11-29 
13:02:27.475332] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.991 [2024-11-29 13:02:27.475336] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2740) on tqpair=0x234e750 00:15:55.991 [2024-11-29 13:02:27.475345] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.991 [2024-11-29 13:02:27.475349] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.991 [2024-11-29 13:02:27.475353] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x234e750) 00:15:55.991 [2024-11-29 13:02:27.475360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.991 [2024-11-29 13:02:27.475367] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.991 [2024-11-29 13:02:27.475371] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.991 [2024-11-29 13:02:27.475375] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x234e750) 00:15:55.991 [2024-11-29 13:02:27.475381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.991 [2024-11-29 13:02:27.475387] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.991 [2024-11-29 13:02:27.475391] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.991 [2024-11-29 13:02:27.475395] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x234e750) 00:15:55.991 [2024-11-29 13:02:27.475401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.991 [2024-11-29 13:02:27.475407] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.991 [2024-11-29 13:02:27.475412] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.991 [2024-11-29 13:02:27.475415] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.991 [2024-11-29 13:02:27.475421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.991 [2024-11-29 13:02:27.475427] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:55.991 [2024-11-29 13:02:27.475436] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:55.991 [2024-11-29 13:02:27.475452] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.991 [2024-11-29 13:02:27.475456] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x234e750) 00:15:55.991 [2024-11-29 13:02:27.475463] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.991 [2024-11-29 13:02:27.475490] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2740, cid 0, qid 0 00:15:55.991 [2024-11-29 13:02:27.475498] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b28c0, cid 1, qid 0 00:15:55.991 [2024-11-29 13:02:27.475503] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2a40, cid 2, qid 0 00:15:55.991 
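For readers following the trace, the "pdu type = 1 / 5 / 7" values repeated in the DEBUG lines are NVMe/TCP PDU type codes. A minimal decoder using the numbering from the NVMe/TCP transport specification is sketched below; the names are spec terms rather than anything SPDK prints, so treat this as an annotation of the trace, not SPDK code.

#include <stdio.h>

/* Map the numeric "pdu type" from the trace to its NVMe/TCP spec name. */
static const char *nvme_tcp_pdu_type_name(unsigned type)
{
	switch (type) {
	case 0x00: return "ICReq  (initialize connection request)";
	case 0x01: return "ICResp (initialize connection response)";
	case 0x02: return "H2CTermReq";
	case 0x03: return "C2HTermReq";
	case 0x04: return "CapsuleCmd  (host-to-controller command capsule)";
	case 0x05: return "CapsuleResp (controller-to-host response capsule)";
	case 0x06: return "H2CData";
	case 0x07: return "C2HData (controller-to-host data)";
	case 0x09: return "R2T (ready to transfer)";
	default:   return "unknown/reserved";
	}
}

int main(void)
{
	unsigned seen[] = { 1, 5, 7 };	/* the values that appear in this trace */
	for (unsigned i = 0; i < sizeof(seen) / sizeof(seen[0]); i++)
		printf("pdu type = %u -> %s\n", seen[i], nvme_tcp_pdu_type_name(seen[i]));
	return 0;
}

So the long runs of "pdu type = 5" are response capsules completing the admin commands noted on the same lines, while "pdu type = 7" marks controller-to-host data carrying identify and log-page payloads.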
[2024-11-29 13:02:27.475508] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.991 [2024-11-29 13:02:27.475513] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2d40, cid 4, qid 0 00:15:55.991 [2024-11-29 13:02:27.475600] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.991 [2024-11-29 13:02:27.475607] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.991 [2024-11-29 13:02:27.475611] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.991 [2024-11-29 13:02:27.475615] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2d40) on tqpair=0x234e750 00:15:55.991 [2024-11-29 13:02:27.475621] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:15:55.991 [2024-11-29 13:02:27.475626] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:55.991 [2024-11-29 13:02:27.475635] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:15:55.991 [2024-11-29 13:02:27.475642] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:55.991 [2024-11-29 13:02:27.475649] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.991 [2024-11-29 13:02:27.475654] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.991 [2024-11-29 13:02:27.475658] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x234e750) 00:15:55.991 [2024-11-29 13:02:27.475665] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:55.991 [2024-11-29 13:02:27.475698] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2d40, cid 4, qid 0 00:15:55.991 [2024-11-29 13:02:27.475748] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.991 [2024-11-29 13:02:27.475754] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.991 [2024-11-29 13:02:27.475758] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.991 [2024-11-29 13:02:27.475762] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2d40) on tqpair=0x234e750 00:15:55.991 [2024-11-29 13:02:27.475828] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:15:55.991 [2024-11-29 13:02:27.475840] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:55.991 [2024-11-29 13:02:27.475848] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.991 [2024-11-29 13:02:27.475853] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x234e750) 00:15:55.991 [2024-11-29 13:02:27.475860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.991 [2024-11-29 13:02:27.475879] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2d40, cid 4, qid 0 00:15:55.991 
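The identify-controller completion a few entries above reports "transport max_xfer_size 4294967295" for TCP against "MDTS max_xfer_size 131072"; the smaller value is what the identify dumps report as the 131072-byte Max Data Transfer Size. A worked sketch of the MDTS arithmetic from the NVMe base spec follows; the MDTS value of 5 and CAP.MPSMIN of 0 (4 KiB minimum page size) are inferred from the printed 131072 and are not read directly from this log.

#include <stdio.h>
#include <stdint.h>

/* MDTS is a power-of-two multiplier on the minimum memory page size. */
static uint64_t max_xfer_size(uint8_t mdts, uint8_t mpsmin)
{
	uint64_t min_page = 4096ull << mpsmin;	/* 4096 << CAP.MPSMIN */
	return mdts ? (min_page << mdts) : UINT64_MAX;	/* 0 = no limit reported */
}

int main(void)
{
	printf("%llu\n", (unsigned long long)max_xfer_size(5, 0));	/* prints 131072 */
	return 0;
}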
[2024-11-29 13:02:27.475951] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:55.991 [2024-11-29 13:02:27.475960] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:55.991 [2024-11-29 13:02:27.475964] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:55.992 [2024-11-29 13:02:27.475968] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x234e750): datao=0, datal=4096, cccid=4 00:15:55.992 [2024-11-29 13:02:27.475972] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b2d40) on tqpair(0x234e750): expected_datao=0, payload_size=4096 00:15:55.992 [2024-11-29 13:02:27.475977] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.992 [2024-11-29 13:02:27.475984] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:55.992 [2024-11-29 13:02:27.475988] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:55.992 [2024-11-29 13:02:27.475997] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.992 [2024-11-29 13:02:27.476003] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.992 [2024-11-29 13:02:27.476007] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.992 [2024-11-29 13:02:27.476011] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2d40) on tqpair=0x234e750 00:15:55.992 [2024-11-29 13:02:27.476022] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:15:55.992 [2024-11-29 13:02:27.476035] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:15:55.992 [2024-11-29 13:02:27.476046] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:15:55.992 [2024-11-29 13:02:27.476054] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.992 [2024-11-29 13:02:27.476059] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x234e750) 00:15:55.992 [2024-11-29 13:02:27.476066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.992 [2024-11-29 13:02:27.476087] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2d40, cid 4, qid 0 00:15:55.992 [2024-11-29 13:02:27.476161] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:55.992 [2024-11-29 13:02:27.476168] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:55.992 [2024-11-29 13:02:27.476172] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:55.992 [2024-11-29 13:02:27.476176] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x234e750): datao=0, datal=4096, cccid=4 00:15:55.992 [2024-11-29 13:02:27.476180] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b2d40) on tqpair(0x234e750): expected_datao=0, payload_size=4096 00:15:55.992 [2024-11-29 13:02:27.476185] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.992 [2024-11-29 13:02:27.476192] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:55.992 [2024-11-29 13:02:27.476196] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:55.992 [2024-11-29 13:02:27.476205] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 5 00:15:55.992 [2024-11-29 13:02:27.476211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.992 [2024-11-29 13:02:27.476215] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.992 [2024-11-29 13:02:27.476219] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2d40) on tqpair=0x234e750 00:15:55.992 [2024-11-29 13:02:27.476237] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:55.992 [2024-11-29 13:02:27.476250] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:55.992 [2024-11-29 13:02:27.476258] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.992 [2024-11-29 13:02:27.476263] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x234e750) 00:15:55.992 [2024-11-29 13:02:27.476270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.992 [2024-11-29 13:02:27.476290] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2d40, cid 4, qid 0 00:15:55.992 [2024-11-29 13:02:27.476347] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:55.992 [2024-11-29 13:02:27.476354] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:55.992 [2024-11-29 13:02:27.476358] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:55.992 [2024-11-29 13:02:27.476362] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x234e750): datao=0, datal=4096, cccid=4 00:15:55.992 [2024-11-29 13:02:27.476366] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b2d40) on tqpair(0x234e750): expected_datao=0, payload_size=4096 00:15:55.992 [2024-11-29 13:02:27.476371] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.992 [2024-11-29 13:02:27.476378] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:55.992 [2024-11-29 13:02:27.476382] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:55.992 [2024-11-29 13:02:27.476390] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.992 [2024-11-29 13:02:27.476396] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.992 [2024-11-29 13:02:27.476400] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.992 [2024-11-29 13:02:27.476404] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2d40) on tqpair=0x234e750 00:15:55.992 [2024-11-29 13:02:27.476413] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:55.992 [2024-11-29 13:02:27.476422] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:15:55.992 [2024-11-29 13:02:27.476433] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:15:55.992 [2024-11-29 13:02:27.476441] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:55.992 [2024-11-29 
13:02:27.476446] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:55.992 [2024-11-29 13:02:27.476452] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:15:55.992 [2024-11-29 13:02:27.476457] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:15:55.992 [2024-11-29 13:02:27.476462] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:15:55.992 [2024-11-29 13:02:27.476467] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:15:55.992 [2024-11-29 13:02:27.476484] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.992 [2024-11-29 13:02:27.476489] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x234e750) 00:15:55.992 [2024-11-29 13:02:27.476497] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.992 [2024-11-29 13:02:27.476504] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.992 [2024-11-29 13:02:27.476508] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.992 [2024-11-29 13:02:27.476512] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x234e750) 00:15:55.992 [2024-11-29 13:02:27.476518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.992 [2024-11-29 13:02:27.476560] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2d40, cid 4, qid 0 00:15:55.992 [2024-11-29 13:02:27.476568] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2ec0, cid 5, qid 0 00:15:55.992 [2024-11-29 13:02:27.476630] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.992 [2024-11-29 13:02:27.476638] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.992 [2024-11-29 13:02:27.476642] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.992 [2024-11-29 13:02:27.476646] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2d40) on tqpair=0x234e750 00:15:55.992 [2024-11-29 13:02:27.476653] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.992 [2024-11-29 13:02:27.476659] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.992 [2024-11-29 13:02:27.476663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.992 [2024-11-29 13:02:27.476667] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2ec0) on tqpair=0x234e750 00:15:55.992 [2024-11-29 13:02:27.476678] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.992 [2024-11-29 13:02:27.476683] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x234e750) 00:15:55.992 [2024-11-29 13:02:27.476690] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.992 [2024-11-29 13:02:27.476708] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2ec0, cid 5, qid 0 
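The KEEP ALIVE (18h) command just above pairs with the "Sending keep alive every 5000000 us" message earlier in this run. A 10 s keep-alive timeout paced at half the timeout would yield exactly that interval; both the 10 s value and the halving policy are assumptions used for the arithmetic below, since the log prints only the resulting interval.

#include <stdio.h>
#include <stdint.h>

/* Assumed pacing policy: send keep alives at half the negotiated timeout. */
static uint64_t keep_alive_interval_us(uint32_t kato_ms)
{
	return (uint64_t)kato_ms * 1000 / 2;
}

int main(void)
{
	printf("%llu us\n", (unsigned long long)keep_alive_interval_us(10000));	/* 5000000 */
	return 0;
}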
00:15:55.992 [2024-11-29 13:02:27.476750] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.992 [2024-11-29 13:02:27.476757] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.992 [2024-11-29 13:02:27.476760] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.992 [2024-11-29 13:02:27.476764] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2ec0) on tqpair=0x234e750 00:15:55.992 [2024-11-29 13:02:27.476775] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.992 [2024-11-29 13:02:27.476780] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x234e750) 00:15:55.992 [2024-11-29 13:02:27.476787] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.992 [2024-11-29 13:02:27.476804] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2ec0, cid 5, qid 0 00:15:55.992 [2024-11-29 13:02:27.476849] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.992 [2024-11-29 13:02:27.476856] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.992 [2024-11-29 13:02:27.476860] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.992 [2024-11-29 13:02:27.476864] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2ec0) on tqpair=0x234e750 00:15:55.992 [2024-11-29 13:02:27.476875] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.992 [2024-11-29 13:02:27.476880] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x234e750) 00:15:55.992 [2024-11-29 13:02:27.476887] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.992 [2024-11-29 13:02:27.476919] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2ec0, cid 5, qid 0 00:15:55.992 [2024-11-29 13:02:27.476967] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.992 [2024-11-29 13:02:27.476974] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.993 [2024-11-29 13:02:27.476978] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.993 [2024-11-29 13:02:27.476982] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2ec0) on tqpair=0x234e750 00:15:55.993 [2024-11-29 13:02:27.477002] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.993 [2024-11-29 13:02:27.477008] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x234e750) 00:15:55.993 [2024-11-29 13:02:27.477016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.993 [2024-11-29 13:02:27.477024] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.993 [2024-11-29 13:02:27.477028] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x234e750) 00:15:55.993 [2024-11-29 13:02:27.477035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.993 [2024-11-29 13:02:27.477042] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
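The GET FEATURES commands in this stretch carry the feature identifier in the low byte of cdw10: 01h arbitration, 02h power management, 04h temperature threshold and 07h number of queues here, with 0Bh (async event configuration) and 0Fh (keep alive timer) appearing earlier in the run. The decoder below uses those NVMe base-spec feature identifiers; the names come from the spec, not from SPDK's print routine.

#include <stdio.h>
#include <stdint.h>

static const char *nvme_feature_name(uint8_t fid)
{
	switch (fid) {
	case 0x01: return "Arbitration";
	case 0x02: return "Power Management";
	case 0x04: return "Temperature Threshold";
	case 0x07: return "Number of Queues";
	case 0x0b: return "Asynchronous Event Configuration";
	case 0x0f: return "Keep Alive Timer";
	default:   return "other/unlisted";
	}
}

int main(void)
{
	/* cdw10 values copied from the GET/SET FEATURES notices in this log */
	uint32_t cdw10s[] = { 0x00000001, 0x00000002, 0x00000004,
			      0x00000007, 0x0000000b, 0x0000000f };
	for (unsigned i = 0; i < sizeof(cdw10s) / sizeof(cdw10s[0]); i++)
		printf("cdw10=0x%08x -> FID %02xh (%s)\n", cdw10s[i],
		       cdw10s[i] & 0xff, nvme_feature_name(cdw10s[i] & 0xff));
	return 0;
}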
00:15:55.993 [2024-11-29 13:02:27.477046] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x234e750) 00:15:55.993 [2024-11-29 13:02:27.477053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.993 [2024-11-29 13:02:27.477061] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.993 [2024-11-29 13:02:27.477065] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x234e750) 00:15:55.993 [2024-11-29 13:02:27.477072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.993 [2024-11-29 13:02:27.477093] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2ec0, cid 5, qid 0 00:15:55.993 [2024-11-29 13:02:27.477100] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2d40, cid 4, qid 0 00:15:55.993 [2024-11-29 13:02:27.477105] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b3040, cid 6, qid 0 00:15:55.993 [2024-11-29 13:02:27.477110] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b31c0, cid 7, qid 0 00:15:55.993 [2024-11-29 13:02:27.477246] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:55.993 [2024-11-29 13:02:27.477253] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:55.993 [2024-11-29 13:02:27.477257] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:55.993 [2024-11-29 13:02:27.477261] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x234e750): datao=0, datal=8192, cccid=5 00:15:55.993 [2024-11-29 13:02:27.477266] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b2ec0) on tqpair(0x234e750): expected_datao=0, payload_size=8192 00:15:55.993 [2024-11-29 13:02:27.477271] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.993 [2024-11-29 13:02:27.477288] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:55.993 [2024-11-29 13:02:27.477293] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:55.993 [2024-11-29 13:02:27.477299] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:55.993 [2024-11-29 13:02:27.477305] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:55.993 [2024-11-29 13:02:27.477309] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:55.993 [2024-11-29 13:02:27.477312] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x234e750): datao=0, datal=512, cccid=4 00:15:55.993 [2024-11-29 13:02:27.477317] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b2d40) on tqpair(0x234e750): expected_datao=0, payload_size=512 00:15:55.993 [2024-11-29 13:02:27.477322] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.993 [2024-11-29 13:02:27.477328] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:55.993 [2024-11-29 13:02:27.477332] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:55.993 [2024-11-29 13:02:27.477338] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:55.993 [2024-11-29 13:02:27.477344] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:55.993 [2024-11-29 13:02:27.477348] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:55.993 [2024-11-29 13:02:27.477352] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x234e750): datao=0, datal=512, cccid=6 00:15:55.993 [2024-11-29 13:02:27.477357] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b3040) on tqpair(0x234e750): expected_datao=0, payload_size=512 00:15:55.993 [2024-11-29 13:02:27.477361] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.993 [2024-11-29 13:02:27.477368] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:55.993 [2024-11-29 13:02:27.477371] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:55.993 [2024-11-29 13:02:27.477377] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:55.993 [2024-11-29 13:02:27.477383] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:55.993 [2024-11-29 13:02:27.477387] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:55.993 [2024-11-29 13:02:27.477391] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x234e750): datao=0, datal=4096, cccid=7 00:15:55.993 [2024-11-29 13:02:27.477395] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b31c0) on tqpair(0x234e750): expected_datao=0, payload_size=4096 00:15:55.993 [2024-11-29 13:02:27.477400] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.993 [2024-11-29 13:02:27.477406] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:55.993 [2024-11-29 13:02:27.477411] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:55.993 [2024-11-29 13:02:27.477419] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.993 [2024-11-29 13:02:27.477425] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.993 [2024-11-29 13:02:27.477429] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.993 [2024-11-29 13:02:27.477433] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2ec0) on tqpair=0x234e750 00:15:55.993 [2024-11-29 13:02:27.477449] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.993 [2024-11-29 13:02:27.477456] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.993 [2024-11-29 13:02:27.477460] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.993 [2024-11-29 13:02:27.477464] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2d40) on tqpair=0x234e750 00:15:55.993 [2024-11-29 13:02:27.477478] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.993 [2024-11-29 13:02:27.477484] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.993 [2024-11-29 13:02:27.477488] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.993 [2024-11-29 13:02:27.477492] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b3040) on tqpair=0x234e750 00:15:55.993 [2024-11-29 13:02:27.477499] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.993 [2024-11-29 13:02:27.477506] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.993 [2024-11-29 13:02:27.477509] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.993 [2024-11-29 13:02:27.477513] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b31c0) on tqpair=0x234e750 00:15:55.993 
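The four GET LOG PAGE (02h) commands above encode the log identifier in the low byte of cdw10 and a zero-based dword count in bits 31:16, which is why the C2HData PDUs that follow carry payloads of exactly 8192, 512, 512 and 4096 bytes (the datal values in the trace). The decode below applies the NVMe base-spec cdw10 layout to the four values from this trace; nothing beyond that layout is assumed.

#include <stdio.h>
#include <stdint.h>

static const char *log_page_name(uint8_t lid)
{
	switch (lid) {
	case 0x01: return "Error Information";
	case 0x02: return "SMART / Health Information";
	case 0x03: return "Firmware Slot Information";
	case 0x05: return "Commands Supported and Effects";
	default:   return "other/unlisted";
	}
}

int main(void)
{
	/* cdw10 values copied from the GET LOG PAGE notices above */
	uint32_t cdw10s[] = { 0x07ff0001, 0x007f0002, 0x007f0003, 0x03ff0005 };
	for (unsigned i = 0; i < sizeof(cdw10s) / sizeof(cdw10s[0]); i++) {
		uint8_t  lid   = cdw10s[i] & 0xff;
		uint32_t numdl = cdw10s[i] >> 16;	/* zero-based dword count */
		printf("cdw10=0x%08x -> LID %02xh (%s), %u bytes\n", cdw10s[i],
		       lid, log_page_name(lid), (numdl + 1) * 4);
	}
	return 0;
}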
===================================================== 00:15:55.993 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:55.993 ===================================================== 00:15:55.993 Controller Capabilities/Features 00:15:55.993 ================================ 00:15:55.993 Vendor ID: 8086 00:15:55.993 Subsystem Vendor ID: 8086 00:15:55.993 Serial Number: SPDK00000000000001 00:15:55.993 Model Number: SPDK bdev Controller 00:15:55.993 Firmware Version: 25.01 00:15:55.993 Recommended Arb Burst: 6 00:15:55.993 IEEE OUI Identifier: e4 d2 5c 00:15:55.993 Multi-path I/O 00:15:55.993 May have multiple subsystem ports: Yes 00:15:55.993 May have multiple controllers: Yes 00:15:55.993 Associated with SR-IOV VF: No 00:15:55.993 Max Data Transfer Size: 131072 00:15:55.993 Max Number of Namespaces: 32 00:15:55.993 Max Number of I/O Queues: 127 00:15:55.993 NVMe Specification Version (VS): 1.3 00:15:55.993 NVMe Specification Version (Identify): 1.3 00:15:55.993 Maximum Queue Entries: 128 00:15:55.993 Contiguous Queues Required: Yes 00:15:55.993 Arbitration Mechanisms Supported 00:15:55.993 Weighted Round Robin: Not Supported 00:15:55.993 Vendor Specific: Not Supported 00:15:55.993 Reset Timeout: 15000 ms 00:15:55.993 Doorbell Stride: 4 bytes 00:15:55.993 NVM Subsystem Reset: Not Supported 00:15:55.993 Command Sets Supported 00:15:55.993 NVM Command Set: Supported 00:15:55.993 Boot Partition: Not Supported 00:15:55.993 Memory Page Size Minimum: 4096 bytes 00:15:55.993 Memory Page Size Maximum: 4096 bytes 00:15:55.993 Persistent Memory Region: Not Supported 00:15:55.993 Optional Asynchronous Events Supported 00:15:55.993 Namespace Attribute Notices: Supported 00:15:55.993 Firmware Activation Notices: Not Supported 00:15:55.993 ANA Change Notices: Not Supported 00:15:55.993 PLE Aggregate Log Change Notices: Not Supported 00:15:55.993 LBA Status Info Alert Notices: Not Supported 00:15:55.993 EGE Aggregate Log Change Notices: Not Supported 00:15:55.993 Normal NVM Subsystem Shutdown event: Not Supported 00:15:55.993 Zone Descriptor Change Notices: Not Supported 00:15:55.993 Discovery Log Change Notices: Not Supported 00:15:55.993 Controller Attributes 00:15:55.993 128-bit Host Identifier: Supported 00:15:55.993 Non-Operational Permissive Mode: Not Supported 00:15:55.993 NVM Sets: Not Supported 00:15:55.993 Read Recovery Levels: Not Supported 00:15:55.993 Endurance Groups: Not Supported 00:15:55.993 Predictable Latency Mode: Not Supported 00:15:55.993 Traffic Based Keep ALive: Not Supported 00:15:55.993 Namespace Granularity: Not Supported 00:15:55.993 SQ Associations: Not Supported 00:15:55.993 UUID List: Not Supported 00:15:55.993 Multi-Domain Subsystem: Not Supported 00:15:55.993 Fixed Capacity Management: Not Supported 00:15:55.993 Variable Capacity Management: Not Supported 00:15:55.993 Delete Endurance Group: Not Supported 00:15:55.993 Delete NVM Set: Not Supported 00:15:55.993 Extended LBA Formats Supported: Not Supported 00:15:55.994 Flexible Data Placement Supported: Not Supported 00:15:55.994 00:15:55.994 Controller Memory Buffer Support 00:15:55.994 ================================ 00:15:55.994 Supported: No 00:15:55.994 00:15:55.994 Persistent Memory Region Support 00:15:55.994 ================================ 00:15:55.994 Supported: No 00:15:55.994 00:15:55.994 Admin Command Set Attributes 00:15:55.994 ============================ 00:15:55.994 Security Send/Receive: Not Supported 00:15:55.994 Format NVM: Not Supported 00:15:55.994 Firmware Activate/Download: 
Not Supported 00:15:55.994 Namespace Management: Not Supported 00:15:55.994 Device Self-Test: Not Supported 00:15:55.994 Directives: Not Supported 00:15:55.994 NVMe-MI: Not Supported 00:15:55.994 Virtualization Management: Not Supported 00:15:55.994 Doorbell Buffer Config: Not Supported 00:15:55.994 Get LBA Status Capability: Not Supported 00:15:55.994 Command & Feature Lockdown Capability: Not Supported 00:15:55.994 Abort Command Limit: 4 00:15:55.994 Async Event Request Limit: 4 00:15:55.994 Number of Firmware Slots: N/A 00:15:55.994 Firmware Slot 1 Read-Only: N/A 00:15:55.994 Firmware Activation Without Reset: N/A 00:15:55.994 Multiple Update Detection Support: N/A 00:15:55.994 Firmware Update Granularity: No Information Provided 00:15:55.994 Per-Namespace SMART Log: No 00:15:55.994 Asymmetric Namespace Access Log Page: Not Supported 00:15:55.994 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:15:55.994 Command Effects Log Page: Supported 00:15:55.994 Get Log Page Extended Data: Supported 00:15:55.994 Telemetry Log Pages: Not Supported 00:15:55.994 Persistent Event Log Pages: Not Supported 00:15:55.994 Supported Log Pages Log Page: May Support 00:15:55.994 Commands Supported & Effects Log Page: Not Supported 00:15:55.994 Feature Identifiers & Effects Log Page:May Support 00:15:55.994 NVMe-MI Commands & Effects Log Page: May Support 00:15:55.994 Data Area 4 for Telemetry Log: Not Supported 00:15:55.994 Error Log Page Entries Supported: 128 00:15:55.994 Keep Alive: Supported 00:15:55.994 Keep Alive Granularity: 10000 ms 00:15:55.994 00:15:55.994 NVM Command Set Attributes 00:15:55.994 ========================== 00:15:55.994 Submission Queue Entry Size 00:15:55.994 Max: 64 00:15:55.994 Min: 64 00:15:55.994 Completion Queue Entry Size 00:15:55.994 Max: 16 00:15:55.994 Min: 16 00:15:55.994 Number of Namespaces: 32 00:15:55.994 Compare Command: Supported 00:15:55.994 Write Uncorrectable Command: Not Supported 00:15:55.994 Dataset Management Command: Supported 00:15:55.994 Write Zeroes Command: Supported 00:15:55.994 Set Features Save Field: Not Supported 00:15:55.994 Reservations: Supported 00:15:55.994 Timestamp: Not Supported 00:15:55.994 Copy: Supported 00:15:55.994 Volatile Write Cache: Present 00:15:55.994 Atomic Write Unit (Normal): 1 00:15:55.994 Atomic Write Unit (PFail): 1 00:15:55.994 Atomic Compare & Write Unit: 1 00:15:55.994 Fused Compare & Write: Supported 00:15:55.994 Scatter-Gather List 00:15:55.994 SGL Command Set: Supported 00:15:55.994 SGL Keyed: Supported 00:15:55.994 SGL Bit Bucket Descriptor: Not Supported 00:15:55.994 SGL Metadata Pointer: Not Supported 00:15:55.994 Oversized SGL: Not Supported 00:15:55.994 SGL Metadata Address: Not Supported 00:15:55.994 SGL Offset: Supported 00:15:55.994 Transport SGL Data Block: Not Supported 00:15:55.994 Replay Protected Memory Block: Not Supported 00:15:55.994 00:15:55.994 Firmware Slot Information 00:15:55.994 ========================= 00:15:55.994 Active slot: 1 00:15:55.994 Slot 1 Firmware Revision: 25.01 00:15:55.994 00:15:55.994 00:15:55.994 Commands Supported and Effects 00:15:55.994 ============================== 00:15:55.994 Admin Commands 00:15:55.994 -------------- 00:15:55.994 Get Log Page (02h): Supported 00:15:55.994 Identify (06h): Supported 00:15:55.994 Abort (08h): Supported 00:15:55.994 Set Features (09h): Supported 00:15:55.994 Get Features (0Ah): Supported 00:15:55.994 Asynchronous Event Request (0Ch): Supported 00:15:55.994 Keep Alive (18h): Supported 00:15:55.994 I/O Commands 00:15:55.994 ------------ 00:15:55.994 
Flush (00h): Supported LBA-Change 00:15:55.994 Write (01h): Supported LBA-Change 00:15:55.994 Read (02h): Supported 00:15:55.994 Compare (05h): Supported 00:15:55.994 Write Zeroes (08h): Supported LBA-Change 00:15:55.994 Dataset Management (09h): Supported LBA-Change 00:15:55.994 Copy (19h): Supported LBA-Change 00:15:55.994 00:15:55.994 Error Log 00:15:55.994 ========= 00:15:55.994 00:15:55.994 Arbitration 00:15:55.994 =========== 00:15:55.994 Arbitration Burst: 1 00:15:55.994 00:15:55.994 Power Management 00:15:55.994 ================ 00:15:55.994 Number of Power States: 1 00:15:55.994 Current Power State: Power State #0 00:15:55.994 Power State #0: 00:15:55.994 Max Power: 0.00 W 00:15:55.994 Non-Operational State: Operational 00:15:55.994 Entry Latency: Not Reported 00:15:55.994 Exit Latency: Not Reported 00:15:55.994 Relative Read Throughput: 0 00:15:55.994 Relative Read Latency: 0 00:15:55.994 Relative Write Throughput: 0 00:15:55.994 Relative Write Latency: 0 00:15:55.994 Idle Power: Not Reported 00:15:55.994 Active Power: Not Reported 00:15:55.994 Non-Operational Permissive Mode: Not Supported 00:15:55.994 00:15:55.994 Health Information 00:15:55.994 ================== 00:15:55.994 Critical Warnings: 00:15:55.994 Available Spare Space: OK 00:15:55.994 Temperature: OK 00:15:55.994 Device Reliability: OK 00:15:55.994 Read Only: No 00:15:55.994 Volatile Memory Backup: OK 00:15:55.994 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:55.994 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:55.994 Available Spare: 0% 00:15:55.994 Available Spare Threshold: 0% 00:15:55.994 Life Percentage Used:[2024-11-29 13:02:27.477622] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.994 [2024-11-29 13:02:27.477629] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x234e750) 00:15:55.994 [2024-11-29 13:02:27.477637] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.994 [2024-11-29 13:02:27.477661] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b31c0, cid 7, qid 0 00:15:55.994 [2024-11-29 13:02:27.477709] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.994 [2024-11-29 13:02:27.477716] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.994 [2024-11-29 13:02:27.477720] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.994 [2024-11-29 13:02:27.477724] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b31c0) on tqpair=0x234e750 00:15:55.994 [2024-11-29 13:02:27.477765] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:15:55.994 [2024-11-29 13:02:27.477778] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2740) on tqpair=0x234e750 00:15:55.994 [2024-11-29 13:02:27.477785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.994 [2024-11-29 13:02:27.477791] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b28c0) on tqpair=0x234e750 00:15:55.994 [2024-11-29 13:02:27.477796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.994 [2024-11-29 13:02:27.477801] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2a40) on tqpair=0x234e750 
00:15:55.994 [2024-11-29 13:02:27.477806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.994 [2024-11-29 13:02:27.477811] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.994 [2024-11-29 13:02:27.477816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.994 [2024-11-29 13:02:27.477826] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.994 [2024-11-29 13:02:27.477830] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.994 [2024-11-29 13:02:27.477834] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.994 [2024-11-29 13:02:27.477842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.994 [2024-11-29 13:02:27.477865] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.994 [2024-11-29 13:02:27.481896] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.994 [2024-11-29 13:02:27.481916] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.994 [2024-11-29 13:02:27.481921] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.994 [2024-11-29 13:02:27.481926] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.994 [2024-11-29 13:02:27.481936] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.994 [2024-11-29 13:02:27.481941] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.994 [2024-11-29 13:02:27.481946] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.994 [2024-11-29 13:02:27.481955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.995 [2024-11-29 13:02:27.481987] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.995 [2024-11-29 13:02:27.482058] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.995 [2024-11-29 13:02:27.482065] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.995 [2024-11-29 13:02:27.482069] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.995 [2024-11-29 13:02:27.482073] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.995 [2024-11-29 13:02:27.482079] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:15:55.995 [2024-11-29 13:02:27.482084] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:15:55.995 [2024-11-29 13:02:27.482095] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.995 [2024-11-29 13:02:27.482100] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.995 [2024-11-29 13:02:27.482104] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.995 [2024-11-29 13:02:27.482111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.995 
[2024-11-29 13:02:27.482130] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.995 [2024-11-29 13:02:27.482180] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.995 [2024-11-29 13:02:27.482187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.995 [2024-11-29 13:02:27.482191] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.995 [2024-11-29 13:02:27.482195] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.995 [2024-11-29 13:02:27.482207] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.995 [2024-11-29 13:02:27.482211] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.995 [2024-11-29 13:02:27.482215] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.995 [2024-11-29 13:02:27.482223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.995 [2024-11-29 13:02:27.482240] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.995 [2024-11-29 13:02:27.482282] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.995 [2024-11-29 13:02:27.482289] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.995 [2024-11-29 13:02:27.482293] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.995 [2024-11-29 13:02:27.482297] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.995 [2024-11-29 13:02:27.482307] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.995 [2024-11-29 13:02:27.482312] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.995 [2024-11-29 13:02:27.482316] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.995 [2024-11-29 13:02:27.482323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.995 [2024-11-29 13:02:27.482340] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.995 [2024-11-29 13:02:27.482388] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.995 [2024-11-29 13:02:27.482395] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.995 [2024-11-29 13:02:27.482399] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.995 [2024-11-29 13:02:27.482403] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.995 [2024-11-29 13:02:27.482413] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.995 [2024-11-29 13:02:27.482418] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.995 [2024-11-29 13:02:27.482422] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.995 [2024-11-29 13:02:27.482429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.995 [2024-11-29 13:02:27.482446] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.995 [2024-11-29 13:02:27.482490] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:15:55.995 [2024-11-29 13:02:27.482497] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.995 [2024-11-29 13:02:27.482501] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.995 [2024-11-29 13:02:27.482505] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.995 [2024-11-29 13:02:27.482516] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.995 [2024-11-29 13:02:27.482520] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.995 [2024-11-29 13:02:27.482524] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.995 [2024-11-29 13:02:27.482531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.995 [2024-11-29 13:02:27.482549] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.995 [2024-11-29 13:02:27.482597] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.995 [2024-11-29 13:02:27.482603] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.995 [2024-11-29 13:02:27.482607] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.995 [2024-11-29 13:02:27.482611] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.995 [2024-11-29 13:02:27.482622] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.995 [2024-11-29 13:02:27.482627] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.995 [2024-11-29 13:02:27.482630] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.995 [2024-11-29 13:02:27.482638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.995 [2024-11-29 13:02:27.482655] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.995 [2024-11-29 13:02:27.482708] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.995 [2024-11-29 13:02:27.482724] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.995 [2024-11-29 13:02:27.482729] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.995 [2024-11-29 13:02:27.482733] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.995 [2024-11-29 13:02:27.482745] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.995 [2024-11-29 13:02:27.482750] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.995 [2024-11-29 13:02:27.482754] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.995 [2024-11-29 13:02:27.482761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.995 [2024-11-29 13:02:27.482780] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.995 [2024-11-29 13:02:27.482828] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.995 [2024-11-29 13:02:27.482840] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.995 [2024-11-29 13:02:27.482844] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.995 [2024-11-29 13:02:27.482848] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.995 [2024-11-29 13:02:27.482860] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.995 [2024-11-29 13:02:27.482864] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.995 [2024-11-29 13:02:27.482868] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.995 [2024-11-29 13:02:27.482876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.995 [2024-11-29 13:02:27.482926] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.995 [2024-11-29 13:02:27.482976] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.995 [2024-11-29 13:02:27.482983] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.995 [2024-11-29 13:02:27.482986] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.995 [2024-11-29 13:02:27.482991] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.995 [2024-11-29 13:02:27.483002] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.995 [2024-11-29 13:02:27.483007] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.995 [2024-11-29 13:02:27.483011] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.996 [2024-11-29 13:02:27.483018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.996 [2024-11-29 13:02:27.483036] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.996 [2024-11-29 13:02:27.483087] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.996 [2024-11-29 13:02:27.483104] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.996 [2024-11-29 13:02:27.483108] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.996 [2024-11-29 13:02:27.483115] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.996 [2024-11-29 13:02:27.483126] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.996 [2024-11-29 13:02:27.483131] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.996 [2024-11-29 13:02:27.483135] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.996 [2024-11-29 13:02:27.483143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.996 [2024-11-29 13:02:27.483161] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.996 [2024-11-29 13:02:27.483210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.996 [2024-11-29 13:02:27.483217] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.996 [2024-11-29 13:02:27.483221] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.996 [2024-11-29 13:02:27.483225] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 
00:15:55.996 [2024-11-29 13:02:27.483235] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.996 [2024-11-29 13:02:27.483240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.996 [2024-11-29 13:02:27.483244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.996 [2024-11-29 13:02:27.483251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.996 [2024-11-29 13:02:27.483268] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.996 [2024-11-29 13:02:27.483314] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.996 [2024-11-29 13:02:27.483326] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.996 [2024-11-29 13:02:27.483331] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.996 [2024-11-29 13:02:27.483335] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.996 [2024-11-29 13:02:27.483346] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.996 [2024-11-29 13:02:27.483351] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.996 [2024-11-29 13:02:27.483355] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.996 [2024-11-29 13:02:27.483363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.996 [2024-11-29 13:02:27.483381] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.996 [2024-11-29 13:02:27.483425] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.996 [2024-11-29 13:02:27.483432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.996 [2024-11-29 13:02:27.483436] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.996 [2024-11-29 13:02:27.483440] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.996 [2024-11-29 13:02:27.483451] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.996 [2024-11-29 13:02:27.483456] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.996 [2024-11-29 13:02:27.483460] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.996 [2024-11-29 13:02:27.483467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.996 [2024-11-29 13:02:27.483484] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.996 [2024-11-29 13:02:27.483532] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.996 [2024-11-29 13:02:27.483539] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.996 [2024-11-29 13:02:27.483543] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.996 [2024-11-29 13:02:27.483547] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.996 [2024-11-29 13:02:27.483557] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.996 [2024-11-29 13:02:27.483562] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:15:55.996 [2024-11-29 13:02:27.483566] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.996 [2024-11-29 13:02:27.483573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.996 [2024-11-29 13:02:27.483590] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.996 [2024-11-29 13:02:27.483638] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.996 [2024-11-29 13:02:27.483645] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.996 [2024-11-29 13:02:27.483648] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.996 [2024-11-29 13:02:27.483653] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.996 [2024-11-29 13:02:27.483663] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.996 [2024-11-29 13:02:27.483668] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.996 [2024-11-29 13:02:27.483672] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.996 [2024-11-29 13:02:27.483679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.996 [2024-11-29 13:02:27.483696] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.996 [2024-11-29 13:02:27.483738] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.996 [2024-11-29 13:02:27.483745] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.996 [2024-11-29 13:02:27.483751] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.996 [2024-11-29 13:02:27.483756] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.996 [2024-11-29 13:02:27.483767] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.996 [2024-11-29 13:02:27.483771] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.996 [2024-11-29 13:02:27.483775] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.996 [2024-11-29 13:02:27.483783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.996 [2024-11-29 13:02:27.483800] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.996 [2024-11-29 13:02:27.483844] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.996 [2024-11-29 13:02:27.483851] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.996 [2024-11-29 13:02:27.483855] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.996 [2024-11-29 13:02:27.483859] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.996 [2024-11-29 13:02:27.483870] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.996 [2024-11-29 13:02:27.483874] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.996 [2024-11-29 13:02:27.483891] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.996 [2024-11-29 13:02:27.483899] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.996 [2024-11-29 13:02:27.483918] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.996 [2024-11-29 13:02:27.483964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.996 [2024-11-29 13:02:27.483972] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.996 [2024-11-29 13:02:27.483975] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.996 [2024-11-29 13:02:27.483979] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.996 [2024-11-29 13:02:27.483990] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.996 [2024-11-29 13:02:27.483995] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.996 [2024-11-29 13:02:27.483999] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.996 [2024-11-29 13:02:27.484006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.996 [2024-11-29 13:02:27.484023] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.996 [2024-11-29 13:02:27.484071] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.996 [2024-11-29 13:02:27.484078] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.996 [2024-11-29 13:02:27.484081] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.996 [2024-11-29 13:02:27.484086] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.996 [2024-11-29 13:02:27.484096] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.996 [2024-11-29 13:02:27.484101] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.996 [2024-11-29 13:02:27.484105] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.996 [2024-11-29 13:02:27.484112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.996 [2024-11-29 13:02:27.484129] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.996 [2024-11-29 13:02:27.484182] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.996 [2024-11-29 13:02:27.484189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.996 [2024-11-29 13:02:27.484193] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.996 [2024-11-29 13:02:27.484197] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.996 [2024-11-29 13:02:27.484208] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.996 [2024-11-29 13:02:27.484212] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.996 [2024-11-29 13:02:27.484216] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.996 [2024-11-29 13:02:27.484224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.996 [2024-11-29 13:02:27.484240] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.996 [2024-11-29 13:02:27.484285] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.996 [2024-11-29 13:02:27.484292] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.996 [2024-11-29 13:02:27.484296] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.996 [2024-11-29 13:02:27.484300] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.996 [2024-11-29 13:02:27.484311] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.996 [2024-11-29 13:02:27.484315] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.997 [2024-11-29 13:02:27.484319] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.997 [2024-11-29 13:02:27.484327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.997 [2024-11-29 13:02:27.484344] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.997 [2024-11-29 13:02:27.484391] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.997 [2024-11-29 13:02:27.484398] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.997 [2024-11-29 13:02:27.484402] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.997 [2024-11-29 13:02:27.484406] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.997 [2024-11-29 13:02:27.484416] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.997 [2024-11-29 13:02:27.484421] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.997 [2024-11-29 13:02:27.484425] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.997 [2024-11-29 13:02:27.484432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.997 [2024-11-29 13:02:27.484449] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.997 [2024-11-29 13:02:27.484493] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.997 [2024-11-29 13:02:27.484505] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.997 [2024-11-29 13:02:27.484510] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.997 [2024-11-29 13:02:27.484514] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.997 [2024-11-29 13:02:27.484525] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.997 [2024-11-29 13:02:27.484530] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.997 [2024-11-29 13:02:27.484534] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.997 [2024-11-29 13:02:27.484542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.997 [2024-11-29 13:02:27.484559] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.997 [2024-11-29 13:02:27.484607] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.997 [2024-11-29 13:02:27.484614] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.997 [2024-11-29 13:02:27.484618] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.997 [2024-11-29 13:02:27.484622] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.997 [2024-11-29 13:02:27.484632] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.997 [2024-11-29 13:02:27.484637] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.997 [2024-11-29 13:02:27.484641] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.997 [2024-11-29 13:02:27.484648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.997 [2024-11-29 13:02:27.484665] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.997 [2024-11-29 13:02:27.484712] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.997 [2024-11-29 13:02:27.484719] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.997 [2024-11-29 13:02:27.484723] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.997 [2024-11-29 13:02:27.484727] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.997 [2024-11-29 13:02:27.484738] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.997 [2024-11-29 13:02:27.484742] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.997 [2024-11-29 13:02:27.484746] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.997 [2024-11-29 13:02:27.484754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.997 [2024-11-29 13:02:27.484770] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.997 [2024-11-29 13:02:27.484826] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.997 [2024-11-29 13:02:27.484833] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.997 [2024-11-29 13:02:27.484837] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.997 [2024-11-29 13:02:27.484841] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.997 [2024-11-29 13:02:27.484852] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.997 [2024-11-29 13:02:27.484857] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.997 [2024-11-29 13:02:27.484861] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.997 [2024-11-29 13:02:27.484868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.997 [2024-11-29 13:02:27.484899] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.997 [2024-11-29 13:02:27.484949] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.997 [2024-11-29 13:02:27.484967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.997 [2024-11-29 13:02:27.484972] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.997 [2024-11-29 
13:02:27.484976] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.997 [2024-11-29 13:02:27.484988] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.997 [2024-11-29 13:02:27.484993] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.997 [2024-11-29 13:02:27.484997] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.997 [2024-11-29 13:02:27.485005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.997 [2024-11-29 13:02:27.485023] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.997 [2024-11-29 13:02:27.485074] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.997 [2024-11-29 13:02:27.485081] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.997 [2024-11-29 13:02:27.485085] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.997 [2024-11-29 13:02:27.485089] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.997 [2024-11-29 13:02:27.485099] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.997 [2024-11-29 13:02:27.485104] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.997 [2024-11-29 13:02:27.485108] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.997 [2024-11-29 13:02:27.485116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.997 [2024-11-29 13:02:27.485133] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.997 [2024-11-29 13:02:27.485186] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.997 [2024-11-29 13:02:27.485193] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.997 [2024-11-29 13:02:27.485197] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.997 [2024-11-29 13:02:27.485201] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.997 [2024-11-29 13:02:27.485212] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.997 [2024-11-29 13:02:27.485216] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.997 [2024-11-29 13:02:27.485220] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.997 [2024-11-29 13:02:27.485228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.997 [2024-11-29 13:02:27.485245] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.997 [2024-11-29 13:02:27.485292] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.997 [2024-11-29 13:02:27.485299] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.997 [2024-11-29 13:02:27.485303] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.997 [2024-11-29 13:02:27.485307] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.997 [2024-11-29 13:02:27.485318] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:15:55.997 [2024-11-29 13:02:27.485322] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.997 [2024-11-29 13:02:27.485326] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.997 [2024-11-29 13:02:27.485334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.997 [2024-11-29 13:02:27.485351] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.997 [2024-11-29 13:02:27.485395] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.997 [2024-11-29 13:02:27.485407] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.997 [2024-11-29 13:02:27.485411] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.997 [2024-11-29 13:02:27.485415] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.997 [2024-11-29 13:02:27.485427] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.997 [2024-11-29 13:02:27.485431] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.997 [2024-11-29 13:02:27.485435] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.997 [2024-11-29 13:02:27.485443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.997 [2024-11-29 13:02:27.485460] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.997 [2024-11-29 13:02:27.485506] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.997 [2024-11-29 13:02:27.485512] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.997 [2024-11-29 13:02:27.485516] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.997 [2024-11-29 13:02:27.485520] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.997 [2024-11-29 13:02:27.485531] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.997 [2024-11-29 13:02:27.485536] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.997 [2024-11-29 13:02:27.485540] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.997 [2024-11-29 13:02:27.485547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.997 [2024-11-29 13:02:27.485565] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.997 [2024-11-29 13:02:27.485618] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.997 [2024-11-29 13:02:27.485625] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.997 [2024-11-29 13:02:27.485629] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.997 [2024-11-29 13:02:27.485633] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.998 [2024-11-29 13:02:27.485643] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.998 [2024-11-29 13:02:27.485648] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.998 [2024-11-29 13:02:27.485652] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.998 [2024-11-29 13:02:27.485659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.998 [2024-11-29 13:02:27.485676] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.998 [2024-11-29 13:02:27.485724] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.998 [2024-11-29 13:02:27.485731] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.998 [2024-11-29 13:02:27.485735] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.998 [2024-11-29 13:02:27.485739] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.998 [2024-11-29 13:02:27.485749] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.998 [2024-11-29 13:02:27.485754] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.998 [2024-11-29 13:02:27.485758] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.998 [2024-11-29 13:02:27.485765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.998 [2024-11-29 13:02:27.485782] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.998 [2024-11-29 13:02:27.485826] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.998 [2024-11-29 13:02:27.485833] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.998 [2024-11-29 13:02:27.485837] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.998 [2024-11-29 13:02:27.485841] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.998 [2024-11-29 13:02:27.485852] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.998 [2024-11-29 13:02:27.485856] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.998 [2024-11-29 13:02:27.485860] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.998 [2024-11-29 13:02:27.485867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.998 [2024-11-29 13:02:27.488950] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.998 [2024-11-29 13:02:27.488979] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.998 [2024-11-29 13:02:27.488988] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.998 [2024-11-29 13:02:27.488992] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.998 [2024-11-29 13:02:27.488996] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.998 [2024-11-29 13:02:27.489011] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:55.998 [2024-11-29 13:02:27.489016] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:55.998 [2024-11-29 13:02:27.489020] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x234e750) 00:15:55.998 [2024-11-29 13:02:27.489029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.998 [2024-11-29 13:02:27.489054] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b2bc0, cid 3, qid 0 00:15:55.998 [2024-11-29 13:02:27.489110] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:55.998 [2024-11-29 13:02:27.489117] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:55.998 [2024-11-29 13:02:27.489121] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:55.998 [2024-11-29 13:02:27.489125] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b2bc0) on tqpair=0x234e750 00:15:55.998 [2024-11-29 13:02:27.489134] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:15:56.257 0% 00:15:56.257 Data Units Read: 0 00:15:56.257 Data Units Written: 0 00:15:56.257 Host Read Commands: 0 00:15:56.257 Host Write Commands: 0 00:15:56.257 Controller Busy Time: 0 minutes 00:15:56.257 Power Cycles: 0 00:15:56.257 Power On Hours: 0 hours 00:15:56.257 Unsafe Shutdowns: 0 00:15:56.257 Unrecoverable Media Errors: 0 00:15:56.257 Lifetime Error Log Entries: 0 00:15:56.257 Warning Temperature Time: 0 minutes 00:15:56.257 Critical Temperature Time: 0 minutes 00:15:56.257 00:15:56.257 Number of Queues 00:15:56.257 ================ 00:15:56.257 Number of I/O Submission Queues: 127 00:15:56.257 Number of I/O Completion Queues: 127 00:15:56.257 00:15:56.257 Active Namespaces 00:15:56.257 ================= 00:15:56.257 Namespace ID:1 00:15:56.257 Error Recovery Timeout: Unlimited 00:15:56.257 Command Set Identifier: NVM (00h) 00:15:56.257 Deallocate: Supported 00:15:56.257 Deallocated/Unwritten Error: Not Supported 00:15:56.257 Deallocated Read Value: Unknown 00:15:56.257 Deallocate in Write Zeroes: Not Supported 00:15:56.257 Deallocated Guard Field: 0xFFFF 00:15:56.257 Flush: Supported 00:15:56.257 Reservation: Supported 00:15:56.257 Namespace Sharing Capabilities: Multiple Controllers 00:15:56.257 Size (in LBAs): 131072 (0GiB) 00:15:56.257 Capacity (in LBAs): 131072 (0GiB) 00:15:56.257 Utilization (in LBAs): 131072 (0GiB) 00:15:56.258 NGUID: ABCDEF0123456789ABCDEF0123456789 00:15:56.258 EUI64: ABCDEF0123456789 00:15:56.258 UUID: 2ab01bc3-5724-468f-b452-646708cd0ffb 00:15:56.258 Thin Provisioning: Not Supported 00:15:56.258 Per-NS Atomic Units: Yes 00:15:56.258 Atomic Boundary Size (Normal): 0 00:15:56.258 Atomic Boundary Size (PFail): 0 00:15:56.258 Atomic Boundary Offset: 0 00:15:56.258 Maximum Single Source Range Length: 65535 00:15:56.258 Maximum Copy Length: 65535 00:15:56.258 Maximum Source Range Count: 1 00:15:56.258 NGUID/EUI64 Never Reused: No 00:15:56.258 Namespace Write Protected: No 00:15:56.258 Number of LBA Formats: 1 00:15:56.258 Current LBA Format: LBA Format #00 00:15:56.258 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:56.258 00:15:56.258 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:15:56.258 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:56.258 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.258 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:56.258 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.258 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT 
SIGTERM EXIT 00:15:56.258 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:15:56.258 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:56.258 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:15:56.258 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:56.258 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:15:56.258 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:56.258 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:56.258 rmmod nvme_tcp 00:15:56.258 rmmod nvme_fabrics 00:15:56.517 rmmod nvme_keyring 00:15:56.517 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:56.517 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:15:56.517 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:15:56.517 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 74247 ']' 00:15:56.517 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 74247 00:15:56.517 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 74247 ']' 00:15:56.517 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 74247 00:15:56.517 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:15:56.517 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:56.517 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74247 00:15:56.517 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:56.517 killing process with pid 74247 00:15:56.517 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:56.517 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74247' 00:15:56.517 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 74247 00:15:56.517 13:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 74247 00:15:56.787 13:02:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:56.787 13:02:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:56.787 13:02:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:56.787 13:02:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:15:56.787 13:02:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:15:56.787 13:02:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:15:56.787 13:02:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:56.787 13:02:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:56.787 13:02:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:56.787 13:02:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:56.787 13:02:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # 
ip link set nvmf_init_br2 nomaster 00:15:56.787 13:02:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:56.787 13:02:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:56.787 13:02:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:56.787 13:02:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:56.787 13:02:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:56.787 13:02:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:56.787 13:02:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:56.787 13:02:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:56.787 13:02:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:56.787 13:02:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:56.787 13:02:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:15:57.076 00:15:57.076 real 0m2.519s 00:15:57.076 user 0m5.551s 00:15:57.076 sys 0m0.764s 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:57.076 ************************************ 00:15:57.076 END TEST nvmf_identify 00:15:57.076 ************************************ 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:57.076 ************************************ 00:15:57.076 START TEST nvmf_perf 00:15:57.076 ************************************ 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:57.076 * Looking for test storage... 
00:15:57.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:15:57.076 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:57.077 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:57.077 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:57.077 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:15:57.077 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:57.077 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:57.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.077 --rc genhtml_branch_coverage=1 00:15:57.077 --rc genhtml_function_coverage=1 00:15:57.077 --rc genhtml_legend=1 00:15:57.077 --rc geninfo_all_blocks=1 00:15:57.077 --rc geninfo_unexecuted_blocks=1 00:15:57.077 00:15:57.077 ' 00:15:57.077 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:57.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.077 --rc genhtml_branch_coverage=1 00:15:57.077 --rc genhtml_function_coverage=1 00:15:57.077 --rc genhtml_legend=1 00:15:57.077 --rc geninfo_all_blocks=1 00:15:57.077 --rc geninfo_unexecuted_blocks=1 00:15:57.077 00:15:57.077 ' 00:15:57.077 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:57.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.077 --rc genhtml_branch_coverage=1 00:15:57.077 --rc genhtml_function_coverage=1 00:15:57.077 --rc genhtml_legend=1 00:15:57.077 --rc geninfo_all_blocks=1 00:15:57.077 --rc geninfo_unexecuted_blocks=1 00:15:57.077 00:15:57.077 ' 00:15:57.077 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:57.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.077 --rc genhtml_branch_coverage=1 00:15:57.077 --rc genhtml_function_coverage=1 00:15:57.077 --rc genhtml_legend=1 00:15:57.077 --rc geninfo_all_blocks=1 00:15:57.077 --rc geninfo_unexecuted_blocks=1 00:15:57.077 00:15:57.077 ' 00:15:57.077 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:57.077 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:15:57.077 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:57.077 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:57.077 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:15:57.077 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:57.077 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:57.077 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:57.077 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:57.077 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:57.077 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:57.077 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:57.337 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:57.337 Cannot find device "nvmf_init_br" 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:57.337 Cannot find device "nvmf_init_br2" 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:57.337 Cannot find device "nvmf_tgt_br" 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:57.337 Cannot find device "nvmf_tgt_br2" 00:15:57.337 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:15:57.338 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:57.338 Cannot find device "nvmf_init_br" 00:15:57.338 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:15:57.338 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:57.338 Cannot find device "nvmf_init_br2" 00:15:57.338 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:15:57.338 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:57.338 Cannot find device "nvmf_tgt_br" 00:15:57.338 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:15:57.338 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:57.338 Cannot find device "nvmf_tgt_br2" 00:15:57.338 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:15:57.338 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:57.338 Cannot find device "nvmf_br" 00:15:57.338 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:15:57.338 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:57.338 Cannot find device "nvmf_init_if" 00:15:57.338 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:15:57.338 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:57.338 Cannot find device "nvmf_init_if2" 00:15:57.338 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:15:57.338 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:57.338 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:57.338 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:15:57.338 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:57.338 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:57.338 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:15:57.338 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:57.338 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:57.338 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:57.338 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:57.338 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:57.338 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:57.338 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:57.597 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:57.597 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:57.597 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:57.597 13:02:28 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:57.597 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:57.597 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:57.597 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:57.597 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:57.597 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:57.597 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:57.597 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:57.597 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:57.597 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:57.597 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:57.597 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:57.597 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:57.597 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:57.597 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:57.597 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:57.597 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:57.597 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:57.597 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:57.597 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:57.597 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:57.597 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:57.597 13:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:57.597 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:57.597 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:15:57.597 00:15:57.597 --- 10.0.0.3 ping statistics --- 00:15:57.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.597 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:15:57.597 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:57.597 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:15:57.597 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:15:57.597 00:15:57.597 --- 10.0.0.4 ping statistics --- 00:15:57.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.597 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:15:57.597 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:57.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:57.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:15:57.597 00:15:57.597 --- 10.0.0.1 ping statistics --- 00:15:57.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.597 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:15:57.597 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:57.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:57.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:15:57.597 00:15:57.597 --- 10.0.0.2 ping statistics --- 00:15:57.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.597 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:15:57.597 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:57.597 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:15:57.597 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:57.597 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:57.597 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:57.597 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:57.597 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:57.597 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:57.597 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:57.597 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:15:57.597 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:57.597 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:57.597 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:57.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.597 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=74499 00:15:57.597 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 74499 00:15:57.597 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 74499 ']' 00:15:57.597 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:57.597 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.597 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:57.597 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:57.597 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:57.597 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:57.597 [2024-11-29 13:02:29.101695] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:15:57.597 [2024-11-29 13:02:29.101817] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.856 [2024-11-29 13:02:29.249499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:57.856 [2024-11-29 13:02:29.313945] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.856 [2024-11-29 13:02:29.314403] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:57.856 [2024-11-29 13:02:29.314646] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:57.856 [2024-11-29 13:02:29.314912] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:57.856 [2024-11-29 13:02:29.315210] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:57.856 [2024-11-29 13:02:29.316640] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.856 [2024-11-29 13:02:29.316775] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:57.856 [2024-11-29 13:02:29.316852] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.856 [2024-11-29 13:02:29.316853] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:15:58.114 [2024-11-29 13:02:29.377707] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:58.114 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.114 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:15:58.114 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:58.114 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:58.114 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:58.114 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.114 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:58.114 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:15:58.678 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:15:58.678 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:15:58.935 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:15:58.935 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:59.193 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:15:59.193 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:15:59.193 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:15:59.193 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:15:59.193 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:59.452 [2024-11-29 13:02:30.882789] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:59.452 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:59.710 13:02:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:59.710 13:02:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:59.969 13:02:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:59.969 13:02:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:16:00.226 13:02:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:00.484 [2024-11-29 13:02:31.884957] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:00.484 13:02:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:16:00.742 13:02:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:16:00.742 13:02:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:00.742 13:02:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:16:00.742 13:02:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:02.118 Initializing NVMe Controllers 00:16:02.118 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:02.118 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:16:02.118 Initialization complete. Launching workers. 00:16:02.118 ======================================================== 00:16:02.118 Latency(us) 00:16:02.118 Device Information : IOPS MiB/s Average min max 00:16:02.118 PCIE (0000:00:10.0) NSID 1 from core 0: 22495.98 87.87 1421.93 371.77 8164.00 00:16:02.118 ======================================================== 00:16:02.118 Total : 22495.98 87.87 1421.93 371.77 8164.00 00:16:02.118 00:16:02.118 13:02:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:03.052 Initializing NVMe Controllers 00:16:03.052 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:03.052 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:03.052 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:03.052 Initialization complete. Launching workers. 
00:16:03.052 ======================================================== 00:16:03.052 Latency(us) 00:16:03.052 Device Information : IOPS MiB/s Average min max 00:16:03.052 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3803.99 14.86 262.44 94.70 7223.31 00:16:03.052 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8119.34 4997.85 12006.69 00:16:03.052 ======================================================== 00:16:03.052 Total : 3927.99 15.34 510.46 94.70 12006.69 00:16:03.052 00:16:03.318 13:02:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:04.698 Initializing NVMe Controllers 00:16:04.698 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:04.698 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:04.698 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:04.698 Initialization complete. Launching workers. 00:16:04.698 ======================================================== 00:16:04.698 Latency(us) 00:16:04.698 Device Information : IOPS MiB/s Average min max 00:16:04.698 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8533.43 33.33 3750.18 587.68 10770.06 00:16:04.698 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3848.51 15.03 8334.93 6079.69 16547.32 00:16:04.698 ======================================================== 00:16:04.698 Total : 12381.93 48.37 5175.19 587.68 16547.32 00:16:04.698 00:16:04.698 13:02:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:16:04.698 13:02:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:07.230 Initializing NVMe Controllers 00:16:07.230 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:07.230 Controller IO queue size 128, less than required. 00:16:07.230 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:07.230 Controller IO queue size 128, less than required. 00:16:07.230 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:07.230 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:07.230 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:07.230 Initialization complete. Launching workers. 
00:16:07.230 ======================================================== 00:16:07.230 Latency(us) 00:16:07.230 Device Information : IOPS MiB/s Average min max 00:16:07.230 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1702.48 425.62 77024.09 36335.48 114753.02 00:16:07.230 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 685.99 171.50 189712.33 59604.94 300138.62 00:16:07.230 ======================================================== 00:16:07.230 Total : 2388.47 597.12 109389.23 36335.48 300138.62 00:16:07.230 00:16:07.230 13:02:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:16:07.489 Initializing NVMe Controllers 00:16:07.489 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:07.489 Controller IO queue size 128, less than required. 00:16:07.489 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:07.489 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:16:07.489 Controller IO queue size 128, less than required. 00:16:07.489 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:07.489 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:16:07.489 WARNING: Some requested NVMe devices were skipped 00:16:07.489 No valid NVMe controllers or AIO or URING devices found 00:16:07.489 13:02:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:16:10.017 Initializing NVMe Controllers 00:16:10.017 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:10.017 Controller IO queue size 128, less than required. 00:16:10.017 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:10.017 Controller IO queue size 128, less than required. 00:16:10.017 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:10.017 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:10.017 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:10.017 Initialization complete. Launching workers. 
00:16:10.017 00:16:10.017 ==================== 00:16:10.017 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:16:10.017 TCP transport: 00:16:10.017 polls: 8193 00:16:10.017 idle_polls: 4259 00:16:10.017 sock_completions: 3934 00:16:10.017 nvme_completions: 6209 00:16:10.017 submitted_requests: 9246 00:16:10.017 queued_requests: 1 00:16:10.017 00:16:10.017 ==================== 00:16:10.017 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:16:10.017 TCP transport: 00:16:10.017 polls: 10251 00:16:10.017 idle_polls: 6365 00:16:10.017 sock_completions: 3886 00:16:10.017 nvme_completions: 6739 00:16:10.017 submitted_requests: 10176 00:16:10.017 queued_requests: 1 00:16:10.017 ======================================================== 00:16:10.017 Latency(us) 00:16:10.017 Device Information : IOPS MiB/s Average min max 00:16:10.017 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1551.65 387.91 84531.13 51925.92 123778.33 00:16:10.017 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1684.12 421.03 76318.22 39313.36 122345.00 00:16:10.017 ======================================================== 00:16:10.017 Total : 3235.77 808.94 80256.56 39313.36 123778.33 00:16:10.017 00:16:10.017 13:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:16:10.017 13:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:10.275 13:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:16:10.275 13:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:16:10.275 13:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:16:10.275 13:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:10.275 13:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:16:10.275 13:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:10.275 13:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:16:10.275 13:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:10.275 13:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:10.275 rmmod nvme_tcp 00:16:10.275 rmmod nvme_fabrics 00:16:10.546 rmmod nvme_keyring 00:16:10.546 13:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:10.546 13:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:16:10.546 13:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:16:10.546 13:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 74499 ']' 00:16:10.546 13:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 74499 00:16:10.546 13:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 74499 ']' 00:16:10.546 13:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 74499 00:16:10.546 13:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:16:10.546 13:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:10.546 13:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74499 00:16:10.546 killing process with pid 74499 00:16:10.546 13:02:41 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:10.546 13:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:10.546 13:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74499' 00:16:10.546 13:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 74499 00:16:10.546 13:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 74499 00:16:11.134 13:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:11.134 13:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:11.134 13:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:11.134 13:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:16:11.134 13:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:16:11.134 13:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:11.134 13:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:16:11.134 13:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:11.134 13:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:11.134 13:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:11.134 13:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:11.134 13:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:11.134 13:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:11.134 13:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:11.393 13:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:11.393 13:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:11.393 13:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:11.393 13:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:11.393 13:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:11.393 13:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:11.393 13:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:11.393 13:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:11.393 13:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:11.394 13:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.394 13:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:11.394 13:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.394 13:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:16:11.394 ************************************ 00:16:11.394 END TEST nvmf_perf 00:16:11.394 ************************************ 
00:16:11.394 00:16:11.394 real 0m14.456s 00:16:11.394 user 0m52.090s 00:16:11.394 sys 0m4.097s 00:16:11.394 13:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:11.394 13:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:11.394 13:02:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:11.394 13:02:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:11.394 13:02:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:11.394 13:02:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.394 ************************************ 00:16:11.394 START TEST nvmf_fio_host 00:16:11.394 ************************************ 00:16:11.394 13:02:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:11.654 * Looking for test storage... 00:16:11.654 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:11.654 13:02:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:11.654 13:02:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:16:11.654 13:02:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:11.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.654 --rc genhtml_branch_coverage=1 00:16:11.654 --rc genhtml_function_coverage=1 00:16:11.654 --rc genhtml_legend=1 00:16:11.654 --rc geninfo_all_blocks=1 00:16:11.654 --rc geninfo_unexecuted_blocks=1 00:16:11.654 00:16:11.654 ' 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:11.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.654 --rc genhtml_branch_coverage=1 00:16:11.654 --rc genhtml_function_coverage=1 00:16:11.654 --rc genhtml_legend=1 00:16:11.654 --rc geninfo_all_blocks=1 00:16:11.654 --rc geninfo_unexecuted_blocks=1 00:16:11.654 00:16:11.654 ' 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:11.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.654 --rc genhtml_branch_coverage=1 00:16:11.654 --rc genhtml_function_coverage=1 00:16:11.654 --rc genhtml_legend=1 00:16:11.654 --rc geninfo_all_blocks=1 00:16:11.654 --rc geninfo_unexecuted_blocks=1 00:16:11.654 00:16:11.654 ' 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:11.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.654 --rc genhtml_branch_coverage=1 00:16:11.654 --rc genhtml_function_coverage=1 00:16:11.654 --rc genhtml_legend=1 00:16:11.654 --rc geninfo_all_blocks=1 00:16:11.654 --rc geninfo_unexecuted_blocks=1 00:16:11.654 00:16:11.654 ' 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:11.654 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:11.655 13:02:43 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.655 13:02:43 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:11.655 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
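The "[: : integer expression expected" message just above comes from common.sh line 33 evaluating '[' '' -eq 1 ']' against an empty value; the run simply continues because the test returns false. A minimal illustrative guard for that kind of check, using a placeholder variable name (the real variable tested at line 33 is not visible in this log excerpt), might look like:

    # Illustrative sketch only, not the actual common.sh code.
    flag=""                                    # placeholder; empty/unset in this run
    # A plain '[ "$flag" -eq 1 ]' warns "integer expression expected" when empty.
    if [ "${flag:-0}" -eq 1 ]; then            # default empty to 0 before the integer test
        echo "flag enabled"
    fi
    # Or skip the integer comparison entirely when the variable is empty:
    if [[ -n "$flag" && "$flag" -eq 1 ]]; then
        echo "flag enabled"
    fi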
00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:11.655 Cannot find device "nvmf_init_br" 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:11.655 Cannot find device "nvmf_init_br2" 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:11.655 Cannot find device "nvmf_tgt_br" 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:16:11.655 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:16:11.914 Cannot find device "nvmf_tgt_br2" 00:16:11.914 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:16:11.914 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:11.914 Cannot find device "nvmf_init_br" 00:16:11.914 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:16:11.914 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:11.914 Cannot find device "nvmf_init_br2" 00:16:11.914 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:16:11.914 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:11.914 Cannot find device "nvmf_tgt_br" 00:16:11.914 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:16:11.914 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:11.914 Cannot find device "nvmf_tgt_br2" 00:16:11.914 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:16:11.914 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:11.914 Cannot find device "nvmf_br" 00:16:11.914 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:16:11.914 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:11.914 Cannot find device "nvmf_init_if" 00:16:11.914 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:16:11.914 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:11.914 Cannot find device "nvmf_init_if2" 00:16:11.914 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:16:11.914 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:11.914 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:11.915 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:16:11.915 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:11.915 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:11.915 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:16:11.915 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:11.915 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:11.915 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:11.915 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:11.915 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:11.915 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:11.915 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:11.915 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:16:11.915 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:11.915 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:11.915 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:11.915 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:11.915 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:11.915 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:11.915 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:11.915 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:11.915 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:11.915 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:11.915 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:11.915 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:11.915 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:11.915 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:11.915 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:11.915 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:11.915 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:12.174 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:12.174 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:12.174 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:12.174 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:12.174 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:12.174 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:12.174 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:12.174 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:12.174 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:12.174 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:16:12.174 00:16:12.174 --- 10.0.0.3 ping statistics --- 00:16:12.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.174 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:16:12.174 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:12.174 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:12.174 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:16:12.174 00:16:12.174 --- 10.0.0.4 ping statistics --- 00:16:12.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.174 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:16:12.174 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:12.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:12.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:12.174 00:16:12.174 --- 10.0.0.1 ping statistics --- 00:16:12.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.174 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:12.174 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:12.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:12.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:16:12.174 00:16:12.174 --- 10.0.0.2 ping statistics --- 00:16:12.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.174 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:16:12.174 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:12.174 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:16:12.174 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:12.174 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:12.174 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:12.174 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:12.174 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:12.174 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:12.174 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:12.174 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:16:12.174 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:16:12.174 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:12.174 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.174 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74963 00:16:12.174 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:12.174 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:12.174 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74963 00:16:12.174 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 74963 ']' 00:16:12.174 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.174 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:12.174 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.174 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:12.174 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.174 [2024-11-29 13:02:43.581621] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:16:12.174 [2024-11-29 13:02:43.581723] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:12.433 [2024-11-29 13:02:43.738421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:12.433 [2024-11-29 13:02:43.801479] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:12.433 [2024-11-29 13:02:43.801547] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:12.433 [2024-11-29 13:02:43.801571] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:12.433 [2024-11-29 13:02:43.801582] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:12.433 [2024-11-29 13:02:43.801591] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
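The waitforlisten step above blocks until the target process (pid 74963 in this run) is up and listening on /var/tmp/spdk.sock before any RPCs are issued. A rough stand-alone equivalent, assuming the same socket path and not reproducing the actual autotest_common.sh implementation, is:

    # Sketch of waiting for an SPDK app's RPC socket; simplified, illustrative only.
    rpc_sock=/var/tmp/spdk.sock
    pid=74963                                   # nvmf_tgt pid reported above
    for _ in $(seq 1 100); do
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        [ -S "$rpc_sock" ] && break             # socket present -> target accepting RPCs
        sleep 0.1
    done
    # After this, RPCs can be sent, e.g.: scripts/rpc.py -s "$rpc_sock" rpc_get_methods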
00:16:12.433 [2024-11-29 13:02:43.803020] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:12.433 [2024-11-29 13:02:43.803067] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:12.433 [2024-11-29 13:02:43.803187] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.433 [2024-11-29 13:02:43.803180] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:16:12.433 [2024-11-29 13:02:43.860041] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:12.433 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:12.433 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:16:12.433 13:02:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:12.692 [2024-11-29 13:02:44.190624] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:12.950 13:02:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:16:12.950 13:02:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:12.950 13:02:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.951 13:02:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:13.209 Malloc1 00:16:13.209 13:02:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:13.467 13:02:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:13.732 13:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:13.994 [2024-11-29 13:02:45.359932] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:13.994 13:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:16:14.253 13:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:16:14.253 13:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:16:14.253 13:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:16:14.253 13:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:14.253 13:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:14.253 13:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:14.253 13:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:14.253 13:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:16:14.253 13:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:14.253 13:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:14.253 13:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:14.253 13:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:16:14.253 13:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:14.253 13:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:16:14.253 13:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:16:14.253 13:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:14.253 13:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:14.253 13:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:16:14.253 13:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:14.253 13:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:16:14.253 13:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:16:14.253 13:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:14.253 13:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:16:14.511 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:14.511 fio-3.35 00:16:14.511 Starting 1 thread 00:16:17.045 00:16:17.045 test: (groupid=0, jobs=1): err= 0: pid=75037: Fri Nov 29 13:02:48 2024 00:16:17.045 read: IOPS=8880, BW=34.7MiB/s (36.4MB/s)(69.6MiB/2007msec) 00:16:17.045 slat (nsec): min=1874, max=283537, avg=2566.85, stdev=3095.17 00:16:17.045 clat (usec): min=2116, max=13443, avg=7496.32, stdev=607.87 00:16:17.045 lat (usec): min=2147, max=13446, avg=7498.89, stdev=607.66 00:16:17.045 clat percentiles (usec): 00:16:17.045 | 1.00th=[ 6259], 5.00th=[ 6652], 10.00th=[ 6783], 20.00th=[ 7046], 00:16:17.045 | 30.00th=[ 7177], 40.00th=[ 7373], 50.00th=[ 7504], 60.00th=[ 7635], 00:16:17.045 | 70.00th=[ 7767], 80.00th=[ 7963], 90.00th=[ 8160], 95.00th=[ 8455], 00:16:17.045 | 99.00th=[ 8979], 99.50th=[ 9241], 99.90th=[12256], 99.95th=[12911], 00:16:17.045 | 99.99th=[13304] 00:16:17.045 bw ( KiB/s): min=34672, max=36240, per=99.98%, avg=35516.00, stdev=829.24, samples=4 00:16:17.045 iops : min= 8668, max= 9060, avg=8879.00, stdev=207.31, samples=4 00:16:17.045 write: IOPS=8895, BW=34.7MiB/s (36.4MB/s)(69.7MiB/2007msec); 0 zone resets 00:16:17.045 slat (nsec): min=1942, max=213072, avg=2663.26, stdev=2284.91 00:16:17.045 clat (usec): min=1981, max=13387, avg=6843.13, stdev=560.44 00:16:17.045 lat (usec): min=1992, max=13389, avg=6845.80, stdev=560.36 00:16:17.045 
clat percentiles (usec): 00:16:17.045 | 1.00th=[ 5735], 5.00th=[ 6063], 10.00th=[ 6194], 20.00th=[ 6456], 00:16:17.045 | 30.00th=[ 6587], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 6980], 00:16:17.045 | 70.00th=[ 7111], 80.00th=[ 7242], 90.00th=[ 7439], 95.00th=[ 7635], 00:16:17.045 | 99.00th=[ 8094], 99.50th=[ 8455], 99.90th=[11600], 99.95th=[12911], 00:16:17.045 | 99.99th=[13304] 00:16:17.045 bw ( KiB/s): min=35072, max=36240, per=100.00%, avg=35586.00, stdev=487.60, samples=4 00:16:17.045 iops : min= 8768, max= 9060, avg=8896.50, stdev=121.90, samples=4 00:16:17.045 lat (msec) : 2=0.01%, 4=0.14%, 10=99.68%, 20=0.18% 00:16:17.045 cpu : usr=71.24%, sys=21.39%, ctx=50, majf=0, minf=7 00:16:17.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:17.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:17.045 issued rwts: total=17824,17854,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.045 00:16:17.045 Run status group 0 (all jobs): 00:16:17.045 READ: bw=34.7MiB/s (36.4MB/s), 34.7MiB/s-34.7MiB/s (36.4MB/s-36.4MB/s), io=69.6MiB (73.0MB), run=2007-2007msec 00:16:17.045 WRITE: bw=34.7MiB/s (36.4MB/s), 34.7MiB/s-34.7MiB/s (36.4MB/s-36.4MB/s), io=69.7MiB (73.1MB), run=2007-2007msec 00:16:17.045 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:16:17.045 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:16:17.045 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:17.045 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:17.045 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:17.045 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:17.045 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:16:17.045 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:17.045 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:17.045 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:17.045 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:16:17.045 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:17.045 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:16:17.045 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:16:17.045 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:17.045 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:16:17.045 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:17.045 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:17.045 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:16:17.045 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:16:17.045 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:17.045 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:16:17.045 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:16:17.045 fio-3.35 00:16:17.045 Starting 1 thread 00:16:19.580 00:16:19.580 test: (groupid=0, jobs=1): err= 0: pid=75081: Fri Nov 29 13:02:50 2024 00:16:19.580 read: IOPS=8128, BW=127MiB/s (133MB/s)(255MiB/2009msec) 00:16:19.580 slat (usec): min=2, max=153, avg= 3.79, stdev= 2.31 00:16:19.580 clat (usec): min=2025, max=16909, avg=8811.42, stdev=2537.13 00:16:19.580 lat (usec): min=2029, max=16912, avg=8815.21, stdev=2537.15 00:16:19.580 clat percentiles (usec): 00:16:19.580 | 1.00th=[ 4228], 5.00th=[ 5080], 10.00th=[ 5669], 20.00th=[ 6521], 00:16:19.580 | 30.00th=[ 7242], 40.00th=[ 7963], 50.00th=[ 8586], 60.00th=[ 9241], 00:16:19.580 | 70.00th=[10159], 80.00th=[10814], 90.00th=[12125], 95.00th=[13566], 00:16:19.580 | 99.00th=[15664], 99.50th=[15926], 99.90th=[16581], 99.95th=[16712], 00:16:19.580 | 99.99th=[16909] 00:16:19.580 bw ( KiB/s): min=56800, max=73312, per=51.00%, avg=66332.50, stdev=7331.19, samples=4 00:16:19.580 iops : min= 3550, max= 4582, avg=4145.75, stdev=458.17, samples=4 00:16:19.580 write: IOPS=4768, BW=74.5MiB/s (78.1MB/s)(136MiB/1821msec); 0 zone resets 00:16:19.580 slat (usec): min=31, max=350, avg=39.47, stdev= 8.89 00:16:19.580 clat (usec): min=5015, max=22174, avg=12360.52, stdev=2228.12 00:16:19.580 lat (usec): min=5049, max=22211, avg=12399.99, stdev=2229.19 00:16:19.580 clat percentiles (usec): 00:16:19.580 | 1.00th=[ 8029], 5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[10552], 00:16:19.580 | 30.00th=[11076], 40.00th=[11600], 50.00th=[12125], 60.00th=[12649], 00:16:19.580 | 70.00th=[13435], 80.00th=[14222], 90.00th=[15401], 95.00th=[16188], 00:16:19.580 | 99.00th=[18482], 99.50th=[19268], 99.90th=[21103], 99.95th=[21627], 00:16:19.580 | 99.99th=[22152] 00:16:19.580 bw ( KiB/s): min=61280, max=76288, per=90.50%, avg=69043.50, stdev=6760.09, samples=4 00:16:19.580 iops : min= 3830, max= 4768, avg=4315.00, stdev=422.34, samples=4 00:16:19.580 lat (msec) : 4=0.30%, 10=49.04%, 20=50.54%, 50=0.11% 00:16:19.580 cpu : usr=81.42%, sys=14.44%, ctx=5, majf=0, minf=12 00:16:19.580 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:16:19.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.580 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:19.580 issued rwts: total=16330,8683,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:19.580 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:19.580 00:16:19.580 Run status group 0 (all jobs): 00:16:19.580 
READ: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=255MiB (268MB), run=2009-2009msec 00:16:19.580 WRITE: bw=74.5MiB/s (78.1MB/s), 74.5MiB/s-74.5MiB/s (78.1MB/s-78.1MB/s), io=136MiB (142MB), run=1821-1821msec 00:16:19.580 13:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:19.580 13:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:16:19.580 13:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:16:19.580 13:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:16:19.580 13:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:16:19.580 13:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:19.580 13:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:16:19.580 13:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:19.580 13:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:16:19.580 13:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:19.580 13:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:19.580 rmmod nvme_tcp 00:16:19.580 rmmod nvme_fabrics 00:16:19.580 rmmod nvme_keyring 00:16:19.580 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:19.580 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:16:19.580 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:16:19.580 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 74963 ']' 00:16:19.580 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 74963 00:16:19.580 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 74963 ']' 00:16:19.580 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 74963 00:16:19.580 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:16:19.580 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:19.580 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74963 00:16:19.580 killing process with pid 74963 00:16:19.580 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:19.580 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:19.580 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74963' 00:16:19.580 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 74963 00:16:19.580 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 74963 00:16:19.839 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:19.839 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:19.839 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:19.839 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:16:19.839 13:02:51 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:16:19.839 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:19.839 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:16:19.839 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:19.839 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:19.839 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:19.839 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:19.839 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:19.839 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:20.098 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:20.098 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:20.098 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:20.098 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:20.098 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:20.098 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:20.098 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:20.099 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:20.099 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:20.099 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:20.099 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.099 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:20.099 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.099 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:16:20.099 ************************************ 00:16:20.099 END TEST nvmf_fio_host 00:16:20.099 ************************************ 00:16:20.099 00:16:20.099 real 0m8.639s 00:16:20.099 user 0m34.293s 00:16:20.099 sys 0m2.415s 00:16:20.099 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.099 13:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.099 13:02:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:20.099 13:02:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:20.099 13:02:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.099 13:02:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.099 ************************************ 00:16:20.099 START TEST nvmf_failover 
00:16:20.099 ************************************ 00:16:20.099 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:20.359 * Looking for test storage... 00:16:20.359 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:20.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.359 --rc genhtml_branch_coverage=1 00:16:20.359 --rc genhtml_function_coverage=1 00:16:20.359 --rc genhtml_legend=1 00:16:20.359 --rc geninfo_all_blocks=1 00:16:20.359 --rc geninfo_unexecuted_blocks=1 00:16:20.359 00:16:20.359 ' 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:20.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.359 --rc genhtml_branch_coverage=1 00:16:20.359 --rc genhtml_function_coverage=1 00:16:20.359 --rc genhtml_legend=1 00:16:20.359 --rc geninfo_all_blocks=1 00:16:20.359 --rc geninfo_unexecuted_blocks=1 00:16:20.359 00:16:20.359 ' 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:20.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.359 --rc genhtml_branch_coverage=1 00:16:20.359 --rc genhtml_function_coverage=1 00:16:20.359 --rc genhtml_legend=1 00:16:20.359 --rc geninfo_all_blocks=1 00:16:20.359 --rc geninfo_unexecuted_blocks=1 00:16:20.359 00:16:20.359 ' 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:20.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.359 --rc genhtml_branch_coverage=1 00:16:20.359 --rc genhtml_function_coverage=1 00:16:20.359 --rc genhtml_legend=1 00:16:20.359 --rc geninfo_all_blocks=1 00:16:20.359 --rc geninfo_unexecuted_blocks=1 00:16:20.359 00:16:20.359 ' 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.359 
13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:20.359 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
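The nvmftestinit call in progress here rebuilds the same virtual test network that the previous test tore down; condensed from the ip/iptables commands visible elsewhere in this log, the nvmf_veth_init steps that follow amount to roughly:

    # Condensed sketch of the topology built by nvmf_veth_init (not the verbatim
    # common.sh code): initiator veths 10.0.0.1/2 on the host, target veths
    # 10.0.0.3/4 inside the nvmf_tgt_ns_spdk namespace, all joined by one bridge,
    # with NVMe/TCP port 4420 allowed through iptables.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

With this in place the target listens on 10.0.0.3:4420 inside the namespace while the host-side initiator reaches it across the bridge, which is what the 10.0.0.x ping checks earlier in the log were verifying.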
00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:20.359 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:20.360 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:20.360 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:20.360 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:20.360 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:20.360 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:20.360 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:20.360 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:20.360 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:20.360 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:20.360 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:20.360 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:20.360 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:20.360 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:20.360 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:20.360 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:20.360 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:20.360 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:20.360 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:20.360 Cannot find device "nvmf_init_br" 00:16:20.360 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:16:20.360 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:20.360 Cannot find device "nvmf_init_br2" 00:16:20.360 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:16:20.360 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:16:20.620 Cannot find device "nvmf_tgt_br" 00:16:20.620 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:16:20.620 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:20.620 Cannot find device "nvmf_tgt_br2" 00:16:20.620 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:16:20.620 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:20.620 Cannot find device "nvmf_init_br" 00:16:20.620 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:16:20.620 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:20.620 Cannot find device "nvmf_init_br2" 00:16:20.620 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:16:20.620 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:20.620 Cannot find device "nvmf_tgt_br" 00:16:20.620 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:16:20.620 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:20.620 Cannot find device "nvmf_tgt_br2" 00:16:20.620 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:16:20.620 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:20.620 Cannot find device "nvmf_br" 00:16:20.620 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:16:20.620 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:20.620 Cannot find device "nvmf_init_if" 00:16:20.620 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:16:20.620 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:20.620 Cannot find device "nvmf_init_if2" 00:16:20.620 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:16:20.620 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:20.620 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:20.620 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:16:20.620 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:20.620 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:20.620 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:16:20.620 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:20.620 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:20.620 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:20.620 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:20.620 13:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:20.620 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:20.620 
13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:20.620 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:20.620 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:20.620 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:20.620 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:20.620 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:20.620 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:20.620 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:20.620 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:20.620 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:20.620 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:20.620 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:20.620 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:20.620 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:20.620 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:20.620 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:20.620 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:20.620 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:20.620 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:20.620 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:20.880 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:20.880 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:20.880 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:20.880 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:20.880 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:20.880 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:16:20.880 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:20.880 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:20.880 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.274 ms 00:16:20.880 00:16:20.880 --- 10.0.0.3 ping statistics --- 00:16:20.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.880 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:16:20.880 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:20.880 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:20.880 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:16:20.880 00:16:20.880 --- 10.0.0.4 ping statistics --- 00:16:20.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.880 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:16:20.880 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:20.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:20.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:16:20.880 00:16:20.880 --- 10.0.0.1 ping statistics --- 00:16:20.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.880 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:16:20.880 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:20.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:20.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:16:20.880 00:16:20.880 --- 10.0.0.2 ping statistics --- 00:16:20.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.880 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:16:20.880 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:20.880 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:16:20.880 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:20.880 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:20.880 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:20.880 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:20.880 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:20.880 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:20.880 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:20.880 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:16:20.880 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:20.880 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:20.880 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:20.880 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=75357 00:16:20.880 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:20.880 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 75357 00:16:20.880 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.880 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75357 ']' 00:16:20.880 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.880 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:20.880 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.880 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:20.880 13:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:20.880 [2024-11-29 13:02:52.262999] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:16:20.880 [2024-11-29 13:02:52.263273] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.140 [2024-11-29 13:02:52.417124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:21.140 [2024-11-29 13:02:52.489061] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:21.140 [2024-11-29 13:02:52.489360] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:21.140 [2024-11-29 13:02:52.489525] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:21.140 [2024-11-29 13:02:52.489791] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:21.140 [2024-11-29 13:02:52.489807] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
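Editor's note: the nvmf_veth_init block above (common.sh@145-225) builds the whole test network from scratch: a namespace nvmf_tgt_ns_spdk holding the target-side veth ends (10.0.0.3 and 10.0.0.4), the initiator-side ends left in the root namespace (10.0.0.1 and 10.0.0.2), everything joined through the nvmf_br bridge, iptables ACCEPT rules for port 4420, and finally nvmf_tgt launched inside the namespace on cores 1-3. The hedged sketch below replays the same steps standalone, condensed to a single veth pair per side (the job sets up two of each); it assumes root privileges and the workspace paths shown in the log:

  #!/usr/bin/env bash
  # Condensed replay of the nvmf_veth_init + nvmfappstart steps traced above.
  # Run as root; names and addresses follow the log, but only one veth pair
  # per side is created here instead of the job's two.
  set -euo pipefail

  NS=nvmf_tgt_ns_spdk
  ip netns add "$NS"

  # veth pairs: the *_if end carries the IP address, the *_br end joins the bridge.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns "$NS"            # target side lives in the namespace

  ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
  ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec "$NS" ip link set nvmf_tgt_if up
  ip netns exec "$NS" ip link set lo up

  # Bridge the two *_br ends so initiator and target can reach each other.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # Open the NVMe/TCP port, as the ipts() wrapper does in the log.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  ping -c 1 10.0.0.3                          # root namespace -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1      # target namespace -> initiator

  # Launch the target inside the namespace (all trace groups, cores 1-3), as nvmfappstart does.
  ip netns exec "$NS" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &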
00:16:21.140 [2024-11-29 13:02:52.491253] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:21.140 [2024-11-29 13:02:52.491780] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:16:21.140 [2024-11-29 13:02:52.491792] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.140 [2024-11-29 13:02:52.550466] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:22.076 13:02:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:22.076 13:02:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:16:22.076 13:02:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:22.076 13:02:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:22.076 13:02:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:22.076 13:02:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:22.076 13:02:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:22.076 [2024-11-29 13:02:53.517810] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:22.076 13:02:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:22.643 Malloc0 00:16:22.643 13:02:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:22.902 13:02:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:23.161 13:02:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:23.421 [2024-11-29 13:02:54.687807] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:23.421 13:02:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:23.679 [2024-11-29 13:02:54.988086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:23.679 13:02:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:16:23.937 [2024-11-29 13:02:55.248270] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:16:23.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
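Editor's note: once the target is listening on /var/tmp/spdk.sock, host/failover.sh configures it entirely over JSON-RPC: one TCP transport, a 64 MiB / 512 B malloc bdev, a single subsystem with that bdev as its namespace, and listeners on all three ports so they can be removed and re-added while bdevperf keeps I/O running. Gathered into one place, the sequence traced above looks roughly like the sketch below (rpc_py path and flags copied verbatim from the log, not re-derived):

  #!/usr/bin/env bash
  # The RPC configuration sequence traced above (host/failover.sh@22-28), in one place.
  set -euo pipefail

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # defaults to /var/tmp/spdk.sock

  # Transport, with the exact flags the test passes ("-t tcp -o -u 8192").
  $rpc_py nvmf_create_transport -t tcp -o -u 8192

  # Backing device: 64 MiB malloc bdev with 512-byte blocks, named Malloc0.
  $rpc_py bdev_malloc_create 64 512 -b Malloc0

  # Subsystem allowing any host (-a), serial SPDK00000000000001, with Malloc0 as a namespace.
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

  # Listeners on all three ports; the failover loop later removes and re-adds these.
  for port in 4420 4421 4422; do
      $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
  done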
00:16:23.937 13:02:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75419 00:16:23.937 13:02:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:16:23.937 13:02:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:23.937 13:02:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75419 /var/tmp/bdevperf.sock 00:16:23.937 13:02:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75419 ']' 00:16:23.937 13:02:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:23.937 13:02:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:23.938 13:02:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:23.938 13:02:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:23.938 13:02:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:24.871 13:02:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:24.871 13:02:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:16:24.871 13:02:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:25.477 NVMe0n1 00:16:25.477 13:02:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:25.735 00:16:25.735 13:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75444 00:16:25.735 13:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:25.735 13:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:16:26.670 13:02:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:26.928 13:02:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:16:30.216 13:03:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:30.474 00:16:30.474 13:03:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:30.732 13:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:16:34.015 13:03:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:34.015 [2024-11-29 13:03:05.387899] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:34.015 13:03:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:16:34.951 13:03:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:16:35.516 [2024-11-29 13:03:06.721968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f0d660 is same with the state(6) to be set 00:16:35.516 13:03:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75444 00:16:40.787 { 00:16:40.787 "results": [ 00:16:40.787 { 00:16:40.787 "job": "NVMe0n1", 00:16:40.787 "core_mask": "0x1", 00:16:40.787 "workload": "verify", 00:16:40.787 "status": "finished", 00:16:40.787 "verify_range": { 00:16:40.787 "start": 0, 00:16:40.787 "length": 16384 00:16:40.787 }, 00:16:40.787 "queue_depth": 128, 00:16:40.787 "io_size": 4096, 00:16:40.787 "runtime": 15.007647, 00:16:40.787 "iops": 9148.802607097568, 00:16:40.787 "mibps": 35.73751018397488, 00:16:40.787 "io_failed": 3357, 00:16:40.787 "io_timeout": 0, 00:16:40.787 "avg_latency_us": 13624.448522093082, 00:16:40.787 "min_latency_us": 647.9127272727272, 00:16:40.787 "max_latency_us": 19779.956363636364 00:16:40.787 } 00:16:40.787 ], 00:16:40.787 "core_count": 1 00:16:40.787 } 00:16:40.787 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75419 00:16:40.787 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75419 ']' 00:16:40.787 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75419 00:16:40.787 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:16:40.787 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:40.787 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75419 00:16:40.787 killing process with pid 75419 00:16:40.787 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:40.787 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:40.787 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75419' 00:16:40.787 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75419 00:16:40.787 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75419 00:16:41.051 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:41.051 [2024-11-29 13:02:55.325545] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
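Editor's note: the JSON block above is the summary of the 15-second verify workload. NVMe0n1 averaged roughly 9,149 IOPS at 4 KiB with queue depth 128, with 3,357 I/Os reported failed and an average latency of about 13.6 ms, which is consistent with paths being torn down and re-established by the listener removals earlier in the run. The mibps field is simply IOPS times I/O size; a quick check using only numbers printed above:

  # Sanity-check the reported throughput: iops * io_size, expressed in MiB/s.
  awk 'BEGIN { printf "%.5f MiB/s\n", 9148.802607097568 * 4096 / (1024 * 1024) }'
  # -> 35.73751 MiB/s, matching the "mibps" value in the results block.

The try.txt dump that follows is the bdevperf-side log for the same run; the long run of nvme_qpair notices below appears to record each in-flight command being completed with "ABORTED - SQ DELETION" as queue pairs on the old path were torn down during failover.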
00:16:41.052 [2024-11-29 13:02:55.325661] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75419 ] 00:16:41.052 [2024-11-29 13:02:55.477633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.052 [2024-11-29 13:02:55.545379] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.052 [2024-11-29 13:02:55.603603] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:41.052 Running I/O for 15 seconds... 00:16:41.052 8071.00 IOPS, 31.53 MiB/s [2024-11-29T13:03:12.567Z] [2024-11-29 13:02:58.397311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.052 [2024-11-29 13:02:58.397375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.397420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.052 [2024-11-29 13:02:58.397436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.397452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.052 [2024-11-29 13:02:58.397465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.397481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.052 [2024-11-29 13:02:58.397495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.397510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.052 [2024-11-29 13:02:58.397524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.397539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.052 [2024-11-29 13:02:58.397553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.397568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.052 [2024-11-29 13:02:58.397590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.397605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.052 [2024-11-29 13:02:58.397618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:16:41.052 [2024-11-29 13:02:58.397633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.052 [2024-11-29 13:02:58.397647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.397663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.052 [2024-11-29 13:02:58.397688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.397719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.052 [2024-11-29 13:02:58.397762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.397779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.052 [2024-11-29 13:02:58.397793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.397815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.052 [2024-11-29 13:02:58.397829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.397845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.052 [2024-11-29 13:02:58.397858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.397874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.052 [2024-11-29 13:02:58.397898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.397914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.052 [2024-11-29 13:02:58.397941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.397961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.052 [2024-11-29 13:02:58.397975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.397991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.052 [2024-11-29 13:02:58.398006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.398022] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.052 [2024-11-29 13:02:58.398036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.398052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.052 [2024-11-29 13:02:58.398066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.398082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.052 [2024-11-29 13:02:58.398096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.398112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.052 [2024-11-29 13:02:58.398126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.398142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.052 [2024-11-29 13:02:58.398156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.398181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.052 [2024-11-29 13:02:58.398196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.398212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.052 [2024-11-29 13:02:58.398226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.398242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.052 [2024-11-29 13:02:58.398256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.398287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.052 [2024-11-29 13:02:58.398300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.398316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.052 [2024-11-29 13:02:58.398343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.398363] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.052 [2024-11-29 13:02:58.398392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.398406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.052 [2024-11-29 13:02:58.398419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.398449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.052 [2024-11-29 13:02:58.398462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.398477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.052 [2024-11-29 13:02:58.398490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.398512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.052 [2024-11-29 13:02:58.398566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.398582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.052 [2024-11-29 13:02:58.398598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.398614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.052 [2024-11-29 13:02:58.398628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.398643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.052 [2024-11-29 13:02:58.398665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.052 [2024-11-29 13:02:58.398689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.052 [2024-11-29 13:02:58.398711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.398728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.053 [2024-11-29 13:02:58.398742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.398757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78392 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.053 [2024-11-29 13:02:58.398771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.398792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.053 [2024-11-29 13:02:58.398807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.398822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.053 [2024-11-29 13:02:58.398836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.398851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.053 [2024-11-29 13:02:58.398865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.398882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.053 [2024-11-29 13:02:58.398896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.398911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.053 [2024-11-29 13:02:58.398926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.398947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.053 [2024-11-29 13:02:58.398962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.398994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.053 [2024-11-29 13:02:58.399008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.399024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.053 [2024-11-29 13:02:58.399038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.399054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.053 [2024-11-29 13:02:58.399068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.399107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:41.053 [2024-11-29 13:02:58.399139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.399156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.053 [2024-11-29 13:02:58.399170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.399185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.053 [2024-11-29 13:02:58.399200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.399215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.053 [2024-11-29 13:02:58.399229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.399244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.053 [2024-11-29 13:02:58.399258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.399274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.053 [2024-11-29 13:02:58.399288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.399303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.053 [2024-11-29 13:02:58.399317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.399333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.053 [2024-11-29 13:02:58.399347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.399362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:78984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.053 [2024-11-29 13:02:58.399376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.399392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.053 [2024-11-29 13:02:58.399406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.399421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.053 [2024-11-29 13:02:58.399435] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.399451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.053 [2024-11-29 13:02:58.399466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.399486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.053 [2024-11-29 13:02:58.399501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.399524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.053 [2024-11-29 13:02:58.399539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.399554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.053 [2024-11-29 13:02:58.399568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.399584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.053 [2024-11-29 13:02:58.399597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.399613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.053 [2024-11-29 13:02:58.399627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.399642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.053 [2024-11-29 13:02:58.399656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.399672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.053 [2024-11-29 13:02:58.399686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.399701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.053 [2024-11-29 13:02:58.399715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.399731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.053 [2024-11-29 13:02:58.399745] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.399761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.053 [2024-11-29 13:02:58.399775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.399790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.053 [2024-11-29 13:02:58.399804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.399820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.053 [2024-11-29 13:02:58.399834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.053 [2024-11-29 13:02:58.399849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.054 [2024-11-29 13:02:58.399863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.399903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.054 [2024-11-29 13:02:58.399926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.399958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.054 [2024-11-29 13:02:58.399972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.399988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.054 [2024-11-29 13:02:58.400003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.400023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.054 [2024-11-29 13:02:58.400038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.400053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.054 [2024-11-29 13:02:58.400067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.400082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.054 [2024-11-29 13:02:58.400096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.400113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.054 [2024-11-29 13:02:58.400127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.400142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.054 [2024-11-29 13:02:58.400156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.400172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.054 [2024-11-29 13:02:58.400186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.400202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.054 [2024-11-29 13:02:58.400216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.400261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.054 [2024-11-29 13:02:58.400276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.400291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.054 [2024-11-29 13:02:58.400305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.400322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.054 [2024-11-29 13:02:58.400336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.400351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.054 [2024-11-29 13:02:58.400371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.400392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.054 [2024-11-29 13:02:58.400417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.400441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.054 [2024-11-29 13:02:58.400457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 
[2024-11-29 13:02:58.400473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.054 [2024-11-29 13:02:58.400492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.400517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.054 [2024-11-29 13:02:58.400539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.400562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.054 [2024-11-29 13:02:58.400582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.400612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.054 [2024-11-29 13:02:58.400631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.400647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.054 [2024-11-29 13:02:58.400661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.400676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.054 [2024-11-29 13:02:58.400691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.400713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.054 [2024-11-29 13:02:58.400726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.400742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.054 [2024-11-29 13:02:58.400756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.400771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.054 [2024-11-29 13:02:58.400786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.400801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.054 [2024-11-29 13:02:58.400815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.400838] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.054 [2024-11-29 13:02:58.400853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.400869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.054 [2024-11-29 13:02:58.400883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.400898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.054 [2024-11-29 13:02:58.400912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.400940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.054 [2024-11-29 13:02:58.400955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.400971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.054 [2024-11-29 13:02:58.400985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.401001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.054 [2024-11-29 13:02:58.401015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.401031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.054 [2024-11-29 13:02:58.401045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.401060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.054 [2024-11-29 13:02:58.401074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.401090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.054 [2024-11-29 13:02:58.401111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.401132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.054 [2024-11-29 13:02:58.401147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.401162] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.054 [2024-11-29 13:02:58.401176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.401192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.054 [2024-11-29 13:02:58.401206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.054 [2024-11-29 13:02:58.401221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.054 [2024-11-29 13:02:58.401242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:02:58.401259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.055 [2024-11-29 13:02:58.401273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:02:58.401289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.055 [2024-11-29 13:02:58.401317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:02:58.401333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.055 [2024-11-29 13:02:58.401370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:02:58.401391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.055 [2024-11-29 13:02:58.401407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:02:58.401422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.055 [2024-11-29 13:02:58.401436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:02:58.401452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.055 [2024-11-29 13:02:58.401466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:02:58.401482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.055 [2024-11-29 13:02:58.401496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:02:58.401511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:99 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.055 [2024-11-29 13:02:58.401526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:02:58.401549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.055 [2024-11-29 13:02:58.401564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:02:58.401580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.055 [2024-11-29 13:02:58.401594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:02:58.401610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.055 [2024-11-29 13:02:58.401624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:02:58.401639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.055 [2024-11-29 13:02:58.401658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:02:58.401703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.055 [2024-11-29 13:02:58.401718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:02:58.401734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.055 [2024-11-29 13:02:58.401748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:02:58.401764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.055 [2024-11-29 13:02:58.401778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:02:58.401823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:41.055 [2024-11-29 13:02:58.401839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:41.055 [2024-11-29 13:02:58.401851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78704 len:8 PRP1 0x0 PRP2 0x0 00:16:41.055 [2024-11-29 13:02:58.401864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:02:58.401949] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:16:41.055 [2024-11-29 13:02:58.402008] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.055 [2024-11-29 13:02:58.402030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:02:58.402045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.055 [2024-11-29 13:02:58.402059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:02:58.402073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.055 [2024-11-29 13:02:58.402087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:02:58.402101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.055 [2024-11-29 13:02:58.402115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:02:58.402129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:16:41.055 [2024-11-29 13:02:58.402183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2305c60 (9): Bad file descriptor 00:16:41.055 [2024-11-29 13:02:58.406030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:16:41.055 [2024-11-29 13:02:58.431512] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
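(Editor's note, not part of the captured log.) The sequence above — queued I/O aborted with "SQ DELETION", a failover notice from 10.0.0.3:4420 to 10.0.0.3:4421, and a successful controller reset — is the bdev_nvme module switching to an alternate transport ID registered for nqn.2016-06.io.spdk:cnode1. A minimal sketch of how such alternate paths are typically registered through scripts/rpc.py is shown below; the bdev name, addresses, ports, and the failover policy flag are illustrative assumptions and are not taken from this run's test scripts.

  # Sketch only (assumed addresses/ports): register a primary path and a second
  # path for the same subsystem so bdev_nvme can fail over when the first
  # queue pair goes down, as seen in the log entries above.
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover   # failover policy flag assumed
  # Inspect the registered controllers/paths
  ./scripts/rpc.py bdev_nvme_get_controllers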
00:16:41.055 8440.00 IOPS, 32.97 MiB/s [2024-11-29T13:03:12.570Z] 8744.00 IOPS, 34.16 MiB/s [2024-11-29T13:03:12.570Z] 9022.00 IOPS, 35.24 MiB/s [2024-11-29T13:03:12.570Z] [2024-11-29 13:03:02.056899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.055 [2024-11-29 13:03:02.056992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:03:02.057029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.055 [2024-11-29 13:03:02.057084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:03:02.057108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:99200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.055 [2024-11-29 13:03:02.057128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:03:02.057149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:99208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.055 [2024-11-29 13:03:02.057168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:03:02.057189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:99216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.055 [2024-11-29 13:03:02.057208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:03:02.057229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.055 [2024-11-29 13:03:02.057248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:03:02.057269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:99232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.055 [2024-11-29 13:03:02.057287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:03:02.057308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:99240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.055 [2024-11-29 13:03:02.057327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:03:02.057348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.055 [2024-11-29 13:03:02.057367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:03:02.057387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.055 [2024-11-29 13:03:02.057406] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:03:02.057427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:99264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.055 [2024-11-29 13:03:02.057446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:03:02.057467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.055 [2024-11-29 13:03:02.057485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:03:02.057507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.055 [2024-11-29 13:03:02.057526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:03:02.057547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.055 [2024-11-29 13:03:02.057566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:03:02.057598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.055 [2024-11-29 13:03:02.057623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:03:02.057651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.055 [2024-11-29 13:03:02.057680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.055 [2024-11-29 13:03:02.057701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.056 [2024-11-29 13:03:02.057721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.056 [2024-11-29 13:03:02.057745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.056 [2024-11-29 13:03:02.057765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.056 [2024-11-29 13:03:02.057787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.056 [2024-11-29 13:03:02.057806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.056 [2024-11-29 13:03:02.057827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.056 [2024-11-29 13:03:02.057847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.056 [2024-11-29 13:03:02.057868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:99744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.056 [2024-11-29 13:03:02.057902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.056 [2024-11-29 13:03:02.057924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.056 [2024-11-29 13:03:02.057943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.056 [2024-11-29 13:03:02.057965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.056 [2024-11-29 13:03:02.057984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.056 [2024-11-29 13:03:02.058005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.056 [2024-11-29 13:03:02.058024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.056 [2024-11-29 13:03:02.058045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:99776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.056 [2024-11-29 13:03:02.058065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.056 [2024-11-29 13:03:02.058092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.056 [2024-11-29 13:03:02.058113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.056 [2024-11-29 13:03:02.058135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.056 [2024-11-29 13:03:02.058155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.056 [2024-11-29 13:03:02.058188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.056 [2024-11-29 13:03:02.058209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.056 [2024-11-29 13:03:02.058230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.056 [2024-11-29 13:03:02.058250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.056 [2024-11-29 13:03:02.058271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.056 [2024-11-29 13:03:02.058291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:16:41.056 [2024-11-29 13:03:02.058313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:99288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.056 [2024-11-29 13:03:02.058333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.056 [2024-11-29 13:03:02.058356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:99296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.056 [2024-11-29 13:03:02.058376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.056 [2024-11-29 13:03:02.058397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.056 [2024-11-29 13:03:02.058417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.056 [2024-11-29 13:03:02.058439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:99312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.056 [2024-11-29 13:03:02.058459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.056 [2024-11-29 13:03:02.058481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.056 [2024-11-29 13:03:02.058501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.056 [2024-11-29 13:03:02.058522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:99328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.056 [2024-11-29 13:03:02.058543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.056 [2024-11-29 13:03:02.058565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.056 [2024-11-29 13:03:02.058585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.056 [2024-11-29 13:03:02.058607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.056 [2024-11-29 13:03:02.058626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.056 [2024-11-29 13:03:02.058647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.056 [2024-11-29 13:03:02.058667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.056 [2024-11-29 13:03:02.058688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.056 [2024-11-29 13:03:02.058716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.056 
[2024-11-29 13:03:02.058738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.056 [2024-11-29 13:03:02.058757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.056 [2024-11-29 13:03:02.058779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.056 [2024-11-29 13:03:02.058800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.056 [2024-11-29 13:03:02.058821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.056 [2024-11-29 13:03:02.058841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.056 [2024-11-29 13:03:02.058862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.056 [2024-11-29 13:03:02.058904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.056 [2024-11-29 13:03:02.058930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:99400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.056 [2024-11-29 13:03:02.058950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.056 [2024-11-29 13:03:02.058971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.056 [2024-11-29 13:03:02.058991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.056 [2024-11-29 13:03:02.059012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.056 [2024-11-29 13:03:02.059031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.059053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.057 [2024-11-29 13:03:02.059085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.059110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.057 [2024-11-29 13:03:02.059130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.059151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.057 [2024-11-29 13:03:02.059171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.059193] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.057 [2024-11-29 13:03:02.059212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.059234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.057 [2024-11-29 13:03:02.059253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.059287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.057 [2024-11-29 13:03:02.059308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.059329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.057 [2024-11-29 13:03:02.059348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.059370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.057 [2024-11-29 13:03:02.059390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.059412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.057 [2024-11-29 13:03:02.059433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.059454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:99904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.057 [2024-11-29 13:03:02.059474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.059495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.057 [2024-11-29 13:03:02.059516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.059537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:99408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.057 [2024-11-29 13:03:02.059556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.059577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:99416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.057 [2024-11-29 13:03:02.059597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.059619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:126 nsid:1 lba:99424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.057 [2024-11-29 13:03:02.059638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.059660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.057 [2024-11-29 13:03:02.059679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.059700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.057 [2024-11-29 13:03:02.059720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.059741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.057 [2024-11-29 13:03:02.059761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.059783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.057 [2024-11-29 13:03:02.059811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.059833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.057 [2024-11-29 13:03:02.059853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.059875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:99920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.057 [2024-11-29 13:03:02.059912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.059934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.057 [2024-11-29 13:03:02.059954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.059976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.057 [2024-11-29 13:03:02.059996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.060019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.057 [2024-11-29 13:03:02.060043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.060070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99952 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.057 [2024-11-29 13:03:02.060094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.060111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.057 [2024-11-29 13:03:02.060126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.060142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.057 [2024-11-29 13:03:02.060156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.060171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.057 [2024-11-29 13:03:02.060186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.060202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.057 [2024-11-29 13:03:02.060216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.060232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.057 [2024-11-29 13:03:02.060245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.060262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:100000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.057 [2024-11-29 13:03:02.060276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.060292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.057 [2024-11-29 13:03:02.060315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.060332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.057 [2024-11-29 13:03:02.060346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.060361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.057 [2024-11-29 13:03:02.060376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.060392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.057 
[2024-11-29 13:03:02.060406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.060421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.057 [2024-11-29 13:03:02.060435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.060451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:99472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.057 [2024-11-29 13:03:02.060465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.060481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.057 [2024-11-29 13:03:02.060495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.060510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.057 [2024-11-29 13:03:02.060524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.060540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.057 [2024-11-29 13:03:02.060554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.057 [2024-11-29 13:03:02.060570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.057 [2024-11-29 13:03:02.060584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.060599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.058 [2024-11-29 13:03:02.060613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.060630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.058 [2024-11-29 13:03:02.060644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.060659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.058 [2024-11-29 13:03:02.060674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.060697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.058 [2024-11-29 13:03:02.060712] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.060737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.058 [2024-11-29 13:03:02.060752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.060769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.058 [2024-11-29 13:03:02.060783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.060799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.058 [2024-11-29 13:03:02.060813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.060829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.058 [2024-11-29 13:03:02.060843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.060859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.058 [2024-11-29 13:03:02.060872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.060902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.058 [2024-11-29 13:03:02.060917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.060933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.058 [2024-11-29 13:03:02.060947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.060963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.058 [2024-11-29 13:03:02.060977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.060992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.058 [2024-11-29 13:03:02.061006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.061021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.058 [2024-11-29 13:03:02.061035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.061050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.058 [2024-11-29 13:03:02.061064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.061080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.058 [2024-11-29 13:03:02.061105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.061121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.058 [2024-11-29 13:03:02.061135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.061151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.058 [2024-11-29 13:03:02.061165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.061180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.058 [2024-11-29 13:03:02.061194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.061209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.058 [2024-11-29 13:03:02.061224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.061244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.058 [2024-11-29 13:03:02.061259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.061275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.058 [2024-11-29 13:03:02.061289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.061305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.058 [2024-11-29 13:03:02.061318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.061334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.058 [2024-11-29 13:03:02.061348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.061364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.058 [2024-11-29 13:03:02.061377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.061393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.058 [2024-11-29 13:03:02.061407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.061422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.058 [2024-11-29 13:03:02.061437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.061452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.058 [2024-11-29 13:03:02.061466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.061488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.058 [2024-11-29 13:03:02.061503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.061518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.058 [2024-11-29 13:03:02.061532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.061548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.058 [2024-11-29 13:03:02.061562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.061577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.058 [2024-11-29 13:03:02.061591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.061607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.058 [2024-11-29 13:03:02.061621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.061636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.058 [2024-11-29 13:03:02.061650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:41.058 [2024-11-29 13:03:02.061667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.058 [2024-11-29 13:03:02.061681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.061696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.058 [2024-11-29 13:03:02.061710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.061731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.058 [2024-11-29 13:03:02.061745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.061762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.058 [2024-11-29 13:03:02.061776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.061792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.058 [2024-11-29 13:03:02.061806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.058 [2024-11-29 13:03:02.061822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.059 [2024-11-29 13:03:02.061836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:02.061901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:41.059 [2024-11-29 13:03:02.061918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:41.059 [2024-11-29 13:03:02.061938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99672 len:8 PRP1 0x0 PRP2 0x0 00:16:41.059 [2024-11-29 13:03:02.061953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:02.062016] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:16:41.059 [2024-11-29 13:03:02.062073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.059 [2024-11-29 13:03:02.062095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:02.062110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.059 [2024-11-29 13:03:02.062124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:16:41.059 [2024-11-29 13:03:02.062138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.059 [2024-11-29 13:03:02.062151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:02.062166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.059 [2024-11-29 13:03:02.062180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:02.062193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:16:41.059 [2024-11-29 13:03:02.066146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:16:41.059 [2024-11-29 13:03:02.066186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2305c60 (9): Bad file descriptor 00:16:41.059 [2024-11-29 13:03:02.097224] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:16:41.059 9030.00 IOPS, 35.27 MiB/s [2024-11-29T13:03:12.574Z] 9097.00 IOPS, 35.54 MiB/s [2024-11-29T13:03:12.574Z] 9131.14 IOPS, 35.67 MiB/s [2024-11-29T13:03:12.574Z] 9155.75 IOPS, 35.76 MiB/s [2024-11-29T13:03:12.574Z] 9132.00 IOPS, 35.67 MiB/s [2024-11-29T13:03:12.574Z] [2024-11-29 13:03:06.722465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:57552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.059 [2024-11-29 13:03:06.722529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:06.722567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:57560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.059 [2024-11-29 13:03:06.722584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:06.722601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:58080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.059 [2024-11-29 13:03:06.722616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:06.722632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:58088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.059 [2024-11-29 13:03:06.722646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:06.722663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:58096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.059 [2024-11-29 13:03:06.722677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:06.722721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:58104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.059 [2024-11-29 13:03:06.722737] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:06.722754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:58112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.059 [2024-11-29 13:03:06.722768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:06.722783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:58120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.059 [2024-11-29 13:03:06.722797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:06.722813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:58128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.059 [2024-11-29 13:03:06.722827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:06.722843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:58136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.059 [2024-11-29 13:03:06.722857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:06.722877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:57568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.059 [2024-11-29 13:03:06.722891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:06.722906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:57576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.059 [2024-11-29 13:03:06.722937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:06.722954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:57584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.059 [2024-11-29 13:03:06.722968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:06.722985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:57592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.059 [2024-11-29 13:03:06.722999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:06.723014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:57600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.059 [2024-11-29 13:03:06.723028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:06.723044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:57608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.059 [2024-11-29 13:03:06.723058] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:06.723085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:57616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.059 [2024-11-29 13:03:06.723100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:06.723136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:57624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.059 [2024-11-29 13:03:06.723166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:06.723185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:57632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.059 [2024-11-29 13:03:06.723200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:06.723216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:57640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.059 [2024-11-29 13:03:06.723230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:06.723245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:57648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.059 [2024-11-29 13:03:06.723259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:06.723275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:57656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.059 [2024-11-29 13:03:06.723289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:06.723305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:57664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.059 [2024-11-29 13:03:06.723319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:06.723335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:57672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.059 [2024-11-29 13:03:06.723349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:06.723364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:57680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.059 [2024-11-29 13:03:06.723378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:06.723394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:57688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.059 [2024-11-29 13:03:06.723408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:06.723429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:58144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.059 [2024-11-29 13:03:06.723443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:06.723459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:58152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.059 [2024-11-29 13:03:06.723473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:06.723488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:58160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.059 [2024-11-29 13:03:06.723502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:06.723517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:58168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.059 [2024-11-29 13:03:06.723531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.059 [2024-11-29 13:03:06.723555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:58176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.059 [2024-11-29 13:03:06.723570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.723586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:58184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.060 [2024-11-29 13:03:06.723600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.723615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:58192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.060 [2024-11-29 13:03:06.723629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.723646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:58200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.060 [2024-11-29 13:03:06.723660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.723676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:57696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.060 [2024-11-29 13:03:06.723689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.723720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:57704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.060 [2024-11-29 13:03:06.723734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:41.060 [2024-11-29 13:03:06.723749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:57712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.060 [2024-11-29 13:03:06.723762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.723777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:57720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.060 [2024-11-29 13:03:06.723791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.723806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:57728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.060 [2024-11-29 13:03:06.723819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.723835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:57736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.060 [2024-11-29 13:03:06.723849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.723864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:57744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.060 [2024-11-29 13:03:06.723878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.723903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:57752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.060 [2024-11-29 13:03:06.723919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.723935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:57760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.060 [2024-11-29 13:03:06.723956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.723986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:57768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.060 [2024-11-29 13:03:06.724000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.724015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:57776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.060 [2024-11-29 13:03:06.724045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.724061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:57784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.060 [2024-11-29 13:03:06.724075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 
13:03:06.724091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:57792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.060 [2024-11-29 13:03:06.724105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.724121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:57800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.060 [2024-11-29 13:03:06.724135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.724151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:57808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.060 [2024-11-29 13:03:06.724166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.724182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:57816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.060 [2024-11-29 13:03:06.724196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.724212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:58208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.060 [2024-11-29 13:03:06.724226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.724241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:58216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.060 [2024-11-29 13:03:06.724255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.724271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:58224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.060 [2024-11-29 13:03:06.724292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.724313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:58232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.060 [2024-11-29 13:03:06.724332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.724349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:58240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.060 [2024-11-29 13:03:06.724366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.724383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:58248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.060 [2024-11-29 13:03:06.724405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.724422] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:58256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.060 [2024-11-29 13:03:06.724437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.724453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:58264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.060 [2024-11-29 13:03:06.724467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.724483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.060 [2024-11-29 13:03:06.724497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.724512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.060 [2024-11-29 13:03:06.724526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.724542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.060 [2024-11-29 13:03:06.724556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.724573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.060 [2024-11-29 13:03:06.724587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.724602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.060 [2024-11-29 13:03:06.724616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.724632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.060 [2024-11-29 13:03:06.724646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.724662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.060 [2024-11-29 13:03:06.724675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.724691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:58328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.060 [2024-11-29 13:03:06.724705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.724721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:121 nsid:1 lba:57824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.060 [2024-11-29 13:03:06.724735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.724751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:57832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.060 [2024-11-29 13:03:06.724765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.724787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:57840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.060 [2024-11-29 13:03:06.724802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.724818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:57848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.060 [2024-11-29 13:03:06.724833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.060 [2024-11-29 13:03:06.724848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:57856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.060 [2024-11-29 13:03:06.724862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.724888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:57864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.061 [2024-11-29 13:03:06.724904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.724920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:57872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.061 [2024-11-29 13:03:06.724934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.724950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:57880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.061 [2024-11-29 13:03:06.724965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.724980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:58336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.061 [2024-11-29 13:03:06.724995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.725010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:58344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.061 [2024-11-29 13:03:06.725024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.725040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:58352 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:16:41.061 [2024-11-29 13:03:06.725053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.725069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:58360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.061 [2024-11-29 13:03:06.725083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.725099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:58368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.061 [2024-11-29 13:03:06.725112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.725128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:58376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.061 [2024-11-29 13:03:06.725142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.725157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:58384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.061 [2024-11-29 13:03:06.725180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.725197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:58392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.061 [2024-11-29 13:03:06.725211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.725226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:58400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.061 [2024-11-29 13:03:06.725240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.725255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:58408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.061 [2024-11-29 13:03:06.725269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.725291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:58416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.061 [2024-11-29 13:03:06.725306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.725322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:58424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.061 [2024-11-29 13:03:06.725335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.725351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:58432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.061 [2024-11-29 
13:03:06.725365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.725380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:58440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.061 [2024-11-29 13:03:06.725395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.725410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:58448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.061 [2024-11-29 13:03:06.725425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.725440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.061 [2024-11-29 13:03:06.725469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.725485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:58464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.061 [2024-11-29 13:03:06.725499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.725524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:58472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.061 [2024-11-29 13:03:06.725538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.725553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:58480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.061 [2024-11-29 13:03:06.725567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.725582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:58488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.061 [2024-11-29 13:03:06.725602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.725618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:57888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.061 [2024-11-29 13:03:06.725632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.725647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:57896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.061 [2024-11-29 13:03:06.725679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.725694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:57904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.061 [2024-11-29 13:03:06.725708] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.725723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:57912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.061 [2024-11-29 13:03:06.725736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.725752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:57920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.061 [2024-11-29 13:03:06.725766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.725781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:57928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.061 [2024-11-29 13:03:06.725795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.725810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:57936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.061 [2024-11-29 13:03:06.725824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.725839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.061 [2024-11-29 13:03:06.725853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.725868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:57952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.061 [2024-11-29 13:03:06.725901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.725918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:57960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.061 [2024-11-29 13:03:06.725932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.725947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:57968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.061 [2024-11-29 13:03:06.725961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.725977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:57976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.061 [2024-11-29 13:03:06.725990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.061 [2024-11-29 13:03:06.726014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:57984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.062 [2024-11-29 13:03:06.726029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.062 [2024-11-29 13:03:06.726050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:57992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.062 [2024-11-29 13:03:06.726064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.062 [2024-11-29 13:03:06.726080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:58000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.062 [2024-11-29 13:03:06.726093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.062 [2024-11-29 13:03:06.726109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:58008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.062 [2024-11-29 13:03:06.726123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.062 [2024-11-29 13:03:06.726138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:58016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.062 [2024-11-29 13:03:06.726152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.062 [2024-11-29 13:03:06.726168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:58024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.062 [2024-11-29 13:03:06.726182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.062 [2024-11-29 13:03:06.726198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:58032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.062 [2024-11-29 13:03:06.726213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.062 [2024-11-29 13:03:06.726228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:58040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.062 [2024-11-29 13:03:06.726242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.062 [2024-11-29 13:03:06.726258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:58048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.062 [2024-11-29 13:03:06.726272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.062 [2024-11-29 13:03:06.726288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:58056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.062 [2024-11-29 13:03:06.726301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.062 [2024-11-29 13:03:06.726317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:58064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.062 [2024-11-29 13:03:06.726331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.062 [2024-11-29 13:03:06.726346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23758b0 is same with the state(6) to be set 00:16:41.062 [2024-11-29 13:03:06.726363] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:41.062 [2024-11-29 13:03:06.726374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:41.062 [2024-11-29 13:03:06.726385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58072 len:8 PRP1 0x0 PRP2 0x0 00:16:41.062 [2024-11-29 13:03:06.726406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.062 [2024-11-29 13:03:06.726421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:41.062 [2024-11-29 13:03:06.726431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:41.062 [2024-11-29 13:03:06.726442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58496 len:8 PRP1 0x0 PRP2 0x0 00:16:41.062 [2024-11-29 13:03:06.726455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.062 [2024-11-29 13:03:06.726469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:41.062 [2024-11-29 13:03:06.726479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:41.062 [2024-11-29 13:03:06.726490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58504 len:8 PRP1 0x0 PRP2 0x0 00:16:41.062 [2024-11-29 13:03:06.726508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.062 [2024-11-29 13:03:06.726522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:41.062 [2024-11-29 13:03:06.726533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:41.062 [2024-11-29 13:03:06.726544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58512 len:8 PRP1 0x0 PRP2 0x0 00:16:41.062 [2024-11-29 13:03:06.726557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.062 [2024-11-29 13:03:06.726572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:41.062 [2024-11-29 13:03:06.726583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:41.062 [2024-11-29 13:03:06.726593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58520 len:8 PRP1 0x0 PRP2 0x0 00:16:41.062 [2024-11-29 13:03:06.726607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.062 [2024-11-29 13:03:06.726621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:41.062 [2024-11-29 13:03:06.726631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:41.062 [2024-11-29 13:03:06.726642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58528 len:8 PRP1 0x0 PRP2 0x0 00:16:41.062 [2024-11-29 
13:03:06.726655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.062 [2024-11-29 13:03:06.726669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:41.062 [2024-11-29 13:03:06.726679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:41.062 [2024-11-29 13:03:06.726690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58536 len:8 PRP1 0x0 PRP2 0x0 00:16:41.062 [2024-11-29 13:03:06.726705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.062 [2024-11-29 13:03:06.726719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:41.062 [2024-11-29 13:03:06.726730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:41.062 [2024-11-29 13:03:06.726740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58544 len:8 PRP1 0x0 PRP2 0x0 00:16:41.062 [2024-11-29 13:03:06.726754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.062 [2024-11-29 13:03:06.726767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:41.062 [2024-11-29 13:03:06.726784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:41.062 [2024-11-29 13:03:06.726795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58552 len:8 PRP1 0x0 PRP2 0x0 00:16:41.062 [2024-11-29 13:03:06.726809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.062 [2024-11-29 13:03:06.726823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:41.062 [2024-11-29 13:03:06.726834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:41.062 [2024-11-29 13:03:06.726844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58560 len:8 PRP1 0x0 PRP2 0x0 00:16:41.062 [2024-11-29 13:03:06.726858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.062 [2024-11-29 13:03:06.726871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:41.062 [2024-11-29 13:03:06.726892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:41.062 [2024-11-29 13:03:06.726904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58568 len:8 PRP1 0x0 PRP2 0x0 00:16:41.062 [2024-11-29 13:03:06.726923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.062 [2024-11-29 13:03:06.726998] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:16:41.062 [2024-11-29 13:03:06.727057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.062 [2024-11-29 13:03:06.727088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.062 [2024-11-29 13:03:06.727105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.062 [2024-11-29 13:03:06.727119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.062 [2024-11-29 13:03:06.727133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.062 [2024-11-29 13:03:06.727147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.062 [2024-11-29 13:03:06.727161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.062 [2024-11-29 13:03:06.727175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.062 [2024-11-29 13:03:06.727189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:16:41.062 [2024-11-29 13:03:06.731058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:16:41.062 [2024-11-29 13:03:06.731107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2305c60 (9): Bad file descriptor 00:16:41.062 [2024-11-29 13:03:06.755477] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:16:41.062 9087.50 IOPS, 35.50 MiB/s [2024-11-29T13:03:12.577Z] 9134.00 IOPS, 35.68 MiB/s [2024-11-29T13:03:12.577Z] 9173.50 IOPS, 35.83 MiB/s [2024-11-29T13:03:12.577Z] 9134.00 IOPS, 35.68 MiB/s [2024-11-29T13:03:12.577Z] 9139.86 IOPS, 35.70 MiB/s [2024-11-29T13:03:12.577Z] 9147.87 IOPS, 35.73 MiB/s 00:16:41.062 Latency(us) 00:16:41.062 [2024-11-29T13:03:12.577Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:41.062 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:41.062 Verification LBA range: start 0x0 length 0x4000 00:16:41.062 NVMe0n1 : 15.01 9148.80 35.74 223.69 0.00 13624.45 647.91 19779.96 00:16:41.062 [2024-11-29T13:03:12.577Z] =================================================================================================================== 00:16:41.062 [2024-11-29T13:03:12.577Z] Total : 9148.80 35.74 223.69 0.00 13624.45 647.91 19779.96 00:16:41.062 Received shutdown signal, test time was about 15.000000 seconds 00:16:41.062 00:16:41.063 Latency(us) 00:16:41.063 [2024-11-29T13:03:12.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:41.063 [2024-11-29T13:03:12.578Z] =================================================================================================================== 00:16:41.063 [2024-11-29T13:03:12.578Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:41.063 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:16:41.063 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:16:41.063 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:16:41.063 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75618 00:16:41.063 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:16:41.063 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75618 /var/tmp/bdevperf.sock 00:16:41.063 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75618 ']' 00:16:41.063 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:41.063 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:41.063 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:41.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:41.063 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:41.063 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:41.356 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:41.356 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:16:41.356 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:41.615 [2024-11-29 13:03:13.117875] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:41.874 13:03:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:16:41.874 [2024-11-29 13:03:13.382088] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:16:42.133 13:03:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:42.393 NVMe0n1 00:16:42.393 13:03:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:42.651 00:16:42.651 13:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:42.909 00:16:42.909 13:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:16:42.909 13:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:43.168 13:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:43.426 13:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:16:46.715 13:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:46.715 13:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:16:46.715 13:03:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75690 00:16:46.715 13:03:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:46.715 13:03:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75690 00:16:48.090 { 00:16:48.090 "results": [ 00:16:48.090 { 00:16:48.090 "job": "NVMe0n1", 00:16:48.090 "core_mask": "0x1", 00:16:48.090 "workload": "verify", 00:16:48.090 "status": "finished", 00:16:48.090 "verify_range": { 00:16:48.090 "start": 0, 00:16:48.090 "length": 16384 00:16:48.090 }, 00:16:48.090 "queue_depth": 128, 00:16:48.090 "io_size": 4096, 00:16:48.090 "runtime": 1.015754, 00:16:48.090 "iops": 6118.607458105013, 00:16:48.090 "mibps": 23.90081038322271, 00:16:48.090 "io_failed": 0, 00:16:48.090 "io_timeout": 0, 00:16:48.090 "avg_latency_us": 20833.758036422147, 00:16:48.090 "min_latency_us": 2412.9163636363637, 00:16:48.090 "max_latency_us": 17873.454545454544 00:16:48.090 } 00:16:48.090 ], 00:16:48.090 "core_count": 1 00:16:48.090 } 00:16:48.090 13:03:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:48.090 [2024-11-29 13:03:12.527621] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:16:48.090 [2024-11-29 13:03:12.527733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75618 ] 00:16:48.090 [2024-11-29 13:03:12.669097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.090 [2024-11-29 13:03:12.719812] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.090 [2024-11-29 13:03:12.774041] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:48.090 [2024-11-29 13:03:14.864992] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:16:48.090 [2024-11-29 13:03:14.865128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:48.090 [2024-11-29 13:03:14.865152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:48.090 [2024-11-29 13:03:14.865186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:48.091 [2024-11-29 13:03:14.865200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:48.091 [2024-11-29 13:03:14.865214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:48.091 [2024-11-29 13:03:14.865227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:48.091 [2024-11-29 13:03:14.865241] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:48.091 [2024-11-29 13:03:14.865254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:48.091 [2024-11-29 13:03:14.865268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:16:48.091 [2024-11-29 13:03:14.865316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:16:48.091 [2024-11-29 13:03:14.865347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x160bc60 (9): Bad file descriptor 00:16:48.091 [2024-11-29 13:03:14.869516] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:16:48.091 Running I/O for 1 seconds... 00:16:48.091 6086.00 IOPS, 23.77 MiB/s 00:16:48.091 Latency(us) 00:16:48.091 [2024-11-29T13:03:19.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.091 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:48.091 Verification LBA range: start 0x0 length 0x4000 00:16:48.091 NVMe0n1 : 1.02 6118.61 23.90 0.00 0.00 20833.76 2412.92 17873.45 00:16:48.091 [2024-11-29T13:03:19.606Z] =================================================================================================================== 00:16:48.091 [2024-11-29T13:03:19.606Z] Total : 6118.61 23.90 0.00 0.00 20833.76 2412.92 17873.45 00:16:48.091 13:03:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:48.091 13:03:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:16:48.349 13:03:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:48.619 13:03:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:48.619 13:03:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:16:48.910 13:03:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:49.169 13:03:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:16:52.492 13:03:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:52.492 13:03:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:16:52.492 13:03:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75618 00:16:52.492 13:03:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75618 ']' 00:16:52.492 13:03:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75618 00:16:52.492 13:03:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:16:52.492 13:03:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:16:52.492 13:03:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75618 00:16:52.492 killing process with pid 75618 00:16:52.492 13:03:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:52.492 13:03:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:52.492 13:03:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75618' 00:16:52.492 13:03:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75618 00:16:52.492 13:03:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75618 00:16:52.750 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:16:52.750 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:53.316 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:16:53.316 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:53.316 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:16:53.316 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:53.316 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:16:53.316 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:53.316 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:16:53.316 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:53.316 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:53.316 rmmod nvme_tcp 00:16:53.316 rmmod nvme_fabrics 00:16:53.316 rmmod nvme_keyring 00:16:53.316 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:53.316 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:16:53.316 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:16:53.316 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 75357 ']' 00:16:53.316 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 75357 00:16:53.316 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75357 ']' 00:16:53.316 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75357 00:16:53.316 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:16:53.316 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:53.316 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75357 00:16:53.316 killing process with pid 75357 00:16:53.316 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:53.316 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:53.316 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75357' 00:16:53.316 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@973 -- # kill 75357 00:16:53.316 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75357 00:16:53.575 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:53.575 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:53.575 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:53.575 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:16:53.575 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:16:53.575 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:53.575 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:16:53.575 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:53.575 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:53.575 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:53.575 13:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:53.575 13:03:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:53.575 13:03:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:53.575 13:03:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:53.575 13:03:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:53.575 13:03:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:53.575 13:03:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:53.575 13:03:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:53.831 13:03:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:53.831 13:03:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:53.831 13:03:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:53.831 13:03:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:53.831 13:03:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:53.831 13:03:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.831 13:03:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:53.831 13:03:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.831 13:03:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:16:53.831 ************************************ 00:16:53.831 END TEST nvmf_failover 00:16:53.831 ************************************ 00:16:53.831 00:16:53.831 real 0m33.622s 00:16:53.831 user 2m9.764s 00:16:53.831 sys 0m5.599s 00:16:53.831 13:03:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:53.831 13:03:25 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@10 -- # set +x 00:16:53.831 13:03:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:53.831 13:03:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:53.831 13:03:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:53.831 13:03:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.831 ************************************ 00:16:53.831 START TEST nvmf_host_discovery 00:16:53.831 ************************************ 00:16:53.831 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:53.831 * Looking for test storage... 00:16:53.831 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:53.831 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:53.831 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:16:53.831 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:54.089 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:54.089 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:54.089 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:54.089 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:54.089 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:54.089 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:54.089 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:54.089 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:54.089 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:54.089 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:54.089 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:54.089 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:54.089 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:16:54.089 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:16:54.089 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:54.089 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:54.089 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:16:54.089 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:16:54.089 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:54.089 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:16:54.089 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:54.089 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:16:54.089 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:16:54.089 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:54.089 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:16:54.089 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:54.089 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:54.089 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:54.089 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:16:54.089 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:54.089 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:54.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.089 --rc genhtml_branch_coverage=1 00:16:54.089 --rc genhtml_function_coverage=1 00:16:54.089 --rc genhtml_legend=1 00:16:54.089 --rc geninfo_all_blocks=1 00:16:54.089 --rc geninfo_unexecuted_blocks=1 00:16:54.089 00:16:54.089 ' 00:16:54.089 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:54.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.090 --rc genhtml_branch_coverage=1 00:16:54.090 --rc genhtml_function_coverage=1 00:16:54.090 --rc genhtml_legend=1 00:16:54.090 --rc geninfo_all_blocks=1 00:16:54.090 --rc geninfo_unexecuted_blocks=1 00:16:54.090 00:16:54.090 ' 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:54.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.090 --rc genhtml_branch_coverage=1 00:16:54.090 --rc genhtml_function_coverage=1 00:16:54.090 --rc genhtml_legend=1 00:16:54.090 --rc geninfo_all_blocks=1 00:16:54.090 --rc geninfo_unexecuted_blocks=1 00:16:54.090 00:16:54.090 ' 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:54.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.090 --rc genhtml_branch_coverage=1 00:16:54.090 --rc genhtml_function_coverage=1 00:16:54.090 --rc genhtml_legend=1 00:16:54.090 --rc geninfo_all_blocks=1 00:16:54.090 --rc geninfo_unexecuted_blocks=1 00:16:54.090 00:16:54.090 ' 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:54.090 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:54.090 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:54.091 Cannot find device "nvmf_init_br" 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:54.091 Cannot find device "nvmf_init_br2" 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:54.091 Cannot find device "nvmf_tgt_br" 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:54.091 Cannot find device "nvmf_tgt_br2" 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:54.091 Cannot find device "nvmf_init_br" 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:54.091 Cannot find device "nvmf_init_br2" 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:54.091 Cannot find device "nvmf_tgt_br" 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:54.091 Cannot find device "nvmf_tgt_br2" 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:54.091 Cannot find device "nvmf_br" 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:54.091 Cannot find device "nvmf_init_if" 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:54.091 Cannot find device "nvmf_init_if2" 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:54.091 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:54.091 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:54.091 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:54.350 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:54.350 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.269 ms 00:16:54.350 00:16:54.350 --- 10.0.0.3 ping statistics --- 00:16:54.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.350 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:54.350 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:54.350 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.037 ms 00:16:54.350 00:16:54.350 --- 10.0.0.4 ping statistics --- 00:16:54.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.350 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:54.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:54.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:16:54.350 00:16:54.350 --- 10.0.0.1 ping statistics --- 00:16:54.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.350 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:54.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:54.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:16:54.350 00:16:54.350 --- 10.0.0.2 ping statistics --- 00:16:54.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.350 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=76030 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 76030 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 76030 ']' 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.350 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:54.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.351 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.351 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:54.351 13:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:54.609 [2024-11-29 13:03:25.905635] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:16:54.609 [2024-11-29 13:03:25.905748] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.609 [2024-11-29 13:03:26.060550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.868 [2024-11-29 13:03:26.141158] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:54.868 [2024-11-29 13:03:26.141259] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:54.868 [2024-11-29 13:03:26.141285] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:54.868 [2024-11-29 13:03:26.141296] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:54.868 [2024-11-29 13:03:26.141305] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:54.868 [2024-11-29 13:03:26.141831] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.868 [2024-11-29 13:03:26.219990] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:55.803 13:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:55.803 13:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:16:55.803 13:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:55.803 13:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:55.803 13:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:55.803 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:55.803 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:55.803 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.803 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:55.803 [2024-11-29 13:03:27.032099] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:55.803 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.803 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:16:55.803 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.803 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:55.803 [2024-11-29 13:03:27.044291] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:16:55.803 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.803 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:16:55.803 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.803 13:03:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:55.803 null0 00:16:55.803 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.803 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:16:55.803 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.803 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:55.803 null1 00:16:55.803 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.803 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:16:55.803 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.803 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:55.803 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.803 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76067 00:16:55.803 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:16:55.803 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76067 /tmp/host.sock 00:16:55.803 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 76067 ']' 00:16:55.803 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:16:55.803 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:55.803 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:55.803 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:55.803 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:55.803 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:55.803 [2024-11-29 13:03:27.136056] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:16:55.803 [2024-11-29 13:03:27.136163] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76067 ] 00:16:55.803 [2024-11-29 13:03:27.284742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.062 [2024-11-29 13:03:27.350005] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.062 [2024-11-29 13:03:27.408773] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:56.062 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:56.062 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:16:56.062 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:56.062 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:16:56.062 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.062 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:56.062 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.062 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:16:56.062 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.062 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:56.062 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.062 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:16:56.062 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:16:56.062 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:56.062 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:56.062 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:56.062 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:56.062 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.062 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:56.062 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.062 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:56.062 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:16:56.062 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:56.062 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:56.062 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.062 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:56.062 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:56.062 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:56.062 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@91 -- # get_subsystem_names 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:56.321 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.580 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:56.580 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:56.580 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.580 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:56.580 [2024-11-29 13:03:27.860513] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:56.580 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.580 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:16:56.580 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:56.580 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:56.580 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:56.580 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.580 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:56.580 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:56.580 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.580 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:16:56.580 13:03:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:16:56.580 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:56.580 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:56.580 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:56.580 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.580 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:56.580 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:56.580 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.580 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:16:56.580 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:16:56.580 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:56.580 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:56.580 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:56.580 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:56.580 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:56.580 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:56.580 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:56.580 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:56.581 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:56.581 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.581 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:56.581 13:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.581 13:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:56.581 13:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:16:56.581 13:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:56.581 13:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:56.581 13:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:56.581 13:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.581 13:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:56.581 13:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.581 13:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:56.581 13:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:56.581 13:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:56.581 13:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:56.581 13:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:56.581 13:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:16:56.581 13:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:56.581 13:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:56.581 13:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.581 13:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:56.581 13:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:56.581 13:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:56.581 13:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.839 13:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:16:56.839 13:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:16:57.099 [2024-11-29 13:03:28.498014] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:57.099 [2024-11-29 13:03:28.498052] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:57.099 
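The attach sequence unfolding here is driven by the RPC calls traced above: the host application (RPC socket /tmp/host.sock) starts a persistent discovery service against the target's discovery endpoint at 10.0.0.3:8009, and the target side then builds nqn.2016-06.io.spdk:cnode0, attaches the null0 namespace, opens a TCP listener on port 4420, and with nvmf_subsystem_add_host allows the host NQN nqn.2021-12.io.spdk:test to connect. A minimal sketch of the same sequence using scripts/rpc.py directly (the test's rpc_cmd helper wraps this script; running from the SPDK repo root and a null bdev named null0 created earlier in the test are assumptions here):

  # Host-side app: start a discovery service named "nvme" against the target.
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

  # Target-side app (default RPC socket): expose a subsystem for discovery to find.
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

Once the host is allowed in, the discovery poller fetches the log page, sees the 4420 path, and creates controller nvme0 with bdev nvme0n1, which is what the waitforcondition checks that follow are looking for.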
[2024-11-29 13:03:28.498096] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:57.099 [2024-11-29 13:03:28.504085] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:16:57.099 [2024-11-29 13:03:28.558651] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:16:57.099 [2024-11-29 13:03:28.559861] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1d95e60:1 started. 00:16:57.099 [2024-11-29 13:03:28.562159] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:57.099 [2024-11-29 13:03:28.562202] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:57.099 [2024-11-29 13:03:28.566181] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1d95e60 was disconnected and freed. delete nvme_qpair. 00:16:57.667 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:57.667 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:57.667 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:16:57.667 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:57.667 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.667 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:57.667 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.667 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:57.667 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:57.667 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.667 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.668 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:57.668 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:57.668 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:57.668 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:57.668 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:57.668 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:16:57.668 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:16:57.668 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:57.668 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r 
'.[].name' 00:16:57.668 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:57.668 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.668 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:57.668 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.927 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.927 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:16:57.927 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:57.927 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:57.927 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:57.927 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:57.927 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:57.927 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:16:57.927 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:16:57.927 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:57.927 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:57.927 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:57.927 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:57.928 [2024-11-29 13:03:29.341008] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1da42f0:1 started. 00:16:57.928 [2024-11-29 13:03:29.346893] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1da42f0 was disconnected and freed. delete nvme_qpair. 
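The repeated get_subsystem_names, get_bdev_list, and waitforcondition fragments in this trace are the test's polling helpers: each query goes to the host's RPC socket, names are extracted with jq and flattened with sort and xargs, and the condition is retried roughly ten times at one-second intervals. A sketch of how those helpers are composed, reconstructed from the trace (the real definitions live in host/discovery.sh and common/autotest_common.sh; the rpc_cmd shim below is a simplified assumption, as the actual helper keeps a persistent RPC session):

  rpc_cmd() { scripts/rpc.py "$@"; }   # simplified stand-in for the autotest rpc_cmd helper

  get_subsystem_names() {
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }

  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  # Retry a shell condition until it holds, giving up after ~10 one-second attempts.
  waitforcondition() {
      local cond=$1 max=10
      while (( max-- )); do
          eval "$cond" && return 0
          sleep 1
      done
      return 1
  }

  # Example from this point in the test: wait for the second namespace to show up.
  waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'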
00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.928 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.188 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:58.188 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:58.188 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:58.188 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:58.188 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:16:58.188 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.188 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.188 [2024-11-29 13:03:29.450446] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:58.188 [2024-11-29 13:03:29.451108] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:16:58.188 [2024-11-29 13:03:29.451153] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:58.188 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.188 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:16:58.188 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:58.188 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:58.188 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:58.188 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:58.188 [2024-11-29 13:03:29.457064] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:16:58.188 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:16:58.188 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:58.188 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.188 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.188 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:58.188 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:58.188 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:58.188 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.188 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.188 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:58.188 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:58.188 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:58.188 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:58.188 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:58.188 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:58.188 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:16:58.188 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.189 [2024-11-29 13:03:29.520549] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:16:58.189 [2024-11-29 13:03:29.520617] 
bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:58.189 [2024-11-29 13:03:29.520631] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:58.189 [2024-11-29 13:03:29.520637] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.189 [2024-11-29 13:03:29.682908] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:16:58.189 [2024-11-29 13:03:29.682950] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:58.189 [2024-11-29 13:03:29.688888] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:16:58.189 [2024-11-29 13:03:29.688931] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:58.189 [2024-11-29 13:03:29.689039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.189 [2024-11-29 13:03:29.689084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:16:58.189 [2024-11-29 13:03:29.689113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.189 [2024-11-29 13:03:29.689130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.189 [2024-11-29 13:03:29.689148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.189 [2024-11-29 13:03:29.689165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.189 [2024-11-29 13:03:29.689182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.189 [2024-11-29 13:03:29.689200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.189 [2024-11-29 13:03:29.689219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d72240 is same with the state(6) to be set 00:16:58.189 [2024-11-29 13:03:29.689283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d72240 (9): Bad file descriptor 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:58.189 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
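Just above, the 4420 listener was removed with nvmf_subsystem_remove_listener; the discovery poller reports the 4420 path as gone while 4421 is still present, and the aborted ASYNC EVENT REQUEST completions are the admin queue of the dropped 4420 path being torn down. The checks that follow confirm the controller keeps its bdevs but now exposes only the 4421 path. A sketch of that step, reusing the rpc_cmd and waitforcondition shims sketched earlier (get_subsystem_paths is reconstructed from the jq/sort pipeline visible in the trace):

  # Ports a discovered controller currently has paths to, e.g. "4420 4421".
  get_subsystem_paths() {
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
          | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }

  # Target drops the 4420 listener; the host-side path list should shrink to 4421.
  rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4421" ]]'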
00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:58.449 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:58.708 13:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:58.708 
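At this point the discovery service has been stopped with bdev_nvme_stop_discovery, and the test is confirming that the controller, both bdevs, and their paths disappear and that exactly two new notify events (one per removed bdev) are generated. The get_notification_count traces show a running notify_id advanced by the number of events read each time. A sketch of that accounting and of the teardown check, reconstructed from the traced values (the helper bodies in host/discovery.sh may differ in detail), reusing the earlier shims:

  notify_id=0   # initialized once at the start of the test; it has reached 2 by this point

  # Count notify events newer than the last consumed id, then advance past them.
  get_notification_count() {
      notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }

  # Stop discovery and wait for everything it created to be cleaned up.
  rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
  waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
  waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
  waitforcondition 'get_notification_count && ((notification_count == 2))'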
13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.708 13:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:59.643 [2024-11-29 13:03:31.128752] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:59.643 [2024-11-29 13:03:31.128805] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:59.643 [2024-11-29 13:03:31.128842] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:59.643 [2024-11-29 13:03:31.134809] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:16:59.902 [2024-11-29 13:03:31.193301] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:16:59.902 [2024-11-29 13:03:31.194240] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1d6ad70:1 started. 00:16:59.902 [2024-11-29 13:03:31.196789] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:59.902 [2024-11-29 13:03:31.196854] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:16:59.902 [2024-11-29 13:03:31.198582] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1d6ad70 was disconnected and freed. delete nvme_qpair. 
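The remainder of the run exercises the error paths of bdev_nvme_start_discovery: re-using the existing discovery name "nvme", or starting a second service against the same 8009 endpoint, is rejected with JSON-RPC error -17 ("File exists"), and pointing a discovery service at port 8010, where nothing is listening, fails with -110 ("Connection timed out") once the 3000 ms attach timeout expires. bdev_nvme_get_discovery_info is then used to confirm the original "nvme" service survived every failed attempt. A sketch of those negative checks, reusing the rpc_cmd shim from above (NOT here is a simplified stand-in for the autotest helper of the same name, which asserts that the wrapped command fails):

  NOT() { ! "$@"; }

  # Duplicate discovery name: expected to fail with -17 "File exists".
  NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

  # Unreachable discovery port with a 3 s attach timeout: expected to fail with -110.
  NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
      -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000

  # The original discovery service must still be registered after both failures.
  [[ "$(rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info | jq -r '.[].name')" == "nvme" ]]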
00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:59.902 request: 00:16:59.902 { 00:16:59.902 "name": "nvme", 00:16:59.902 "trtype": "tcp", 00:16:59.902 "traddr": "10.0.0.3", 00:16:59.902 "adrfam": "ipv4", 00:16:59.902 "trsvcid": "8009", 00:16:59.902 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:59.902 "wait_for_attach": true, 00:16:59.902 "method": "bdev_nvme_start_discovery", 00:16:59.902 "req_id": 1 00:16:59.902 } 00:16:59.902 Got JSON-RPC error response 00:16:59.902 response: 00:16:59.902 { 00:16:59.902 "code": -17, 00:16:59.902 "message": "File exists" 00:16:59.902 } 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:59.902 13:03:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.902 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:59.902 request: 00:16:59.902 { 00:16:59.902 "name": "nvme_second", 00:16:59.902 "trtype": "tcp", 00:16:59.902 "traddr": "10.0.0.3", 00:16:59.902 "adrfam": "ipv4", 00:16:59.902 "trsvcid": "8009", 00:16:59.902 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:59.902 "wait_for_attach": true, 00:16:59.902 "method": "bdev_nvme_start_discovery", 00:16:59.902 "req_id": 1 00:16:59.902 } 00:16:59.902 Got JSON-RPC error response 00:16:59.902 response: 00:16:59.902 { 00:16:59.903 "code": -17, 00:16:59.903 "message": "File exists" 00:16:59.903 } 00:16:59.903 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:59.903 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:16:59.903 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:59.903 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:59.903 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:59.903 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:16:59.903 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:59.903 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.903 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:59.903 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:59.903 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:59.903 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:59.903 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.903 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:16:59.903 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:16:59.903 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:59.903 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:59.903 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.903 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:59.903 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:59.903 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:00.161 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.161 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:00.161 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:00.161 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:17:00.161 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:00.161 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:00.161 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:00.161 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:00.161 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:00.161 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:00.161 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.161 13:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:01.096 [2024-11-29 13:03:32.449332] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:01.096 [2024-11-29 13:03:32.449441] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d6dbc0 with addr=10.0.0.3, port=8010 00:17:01.096 [2024-11-29 13:03:32.449467] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:01.096 [2024-11-29 13:03:32.449479] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:01.096 [2024-11-29 13:03:32.449489] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:17:02.032 [2024-11-29 13:03:33.449291] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:02.032 [2024-11-29 13:03:33.449373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d6d5a0 with addr=10.0.0.3, port=8010 00:17:02.032 [2024-11-29 13:03:33.449411] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:02.032 [2024-11-29 13:03:33.449428] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:02.032 [2024-11-29 13:03:33.449442] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:17:03.006 [2024-11-29 13:03:34.449125] bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:17:03.006 request: 00:17:03.006 { 00:17:03.006 "name": "nvme_second", 00:17:03.006 "trtype": "tcp", 00:17:03.006 "traddr": "10.0.0.3", 00:17:03.006 "adrfam": "ipv4", 00:17:03.006 "trsvcid": "8010", 00:17:03.006 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:03.006 "wait_for_attach": false, 00:17:03.006 "attach_timeout_ms": 3000, 00:17:03.006 "method": "bdev_nvme_start_discovery", 00:17:03.006 "req_id": 1 00:17:03.006 } 00:17:03.006 Got JSON-RPC error response 00:17:03.006 response: 00:17:03.006 { 00:17:03.006 "code": -110, 00:17:03.006 "message": "Connection timed out" 00:17:03.006 } 00:17:03.006 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:03.006 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:17:03.006 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:03.006 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:03.006 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:03.006 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:17:03.006 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:03.006 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:03.006 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.006 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:03.006 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:03.006 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:03.006 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.265 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:17:03.265 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - 
SIGINT SIGTERM EXIT 00:17:03.265 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76067 00:17:03.265 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:17:03.265 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:03.265 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:17:03.265 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:03.265 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:17:03.265 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:03.265 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:03.265 rmmod nvme_tcp 00:17:03.265 rmmod nvme_fabrics 00:17:03.265 rmmod nvme_keyring 00:17:03.265 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:03.265 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:17:03.265 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:17:03.265 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 76030 ']' 00:17:03.265 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 76030 00:17:03.265 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 76030 ']' 00:17:03.265 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 76030 00:17:03.265 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:17:03.265 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:03.265 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76030 00:17:03.265 killing process with pid 76030 00:17:03.265 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:03.265 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:03.265 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76030' 00:17:03.265 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 76030 00:17:03.265 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 76030 00:17:03.524 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:03.524 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:03.524 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:03.524 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:17:03.524 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:17:03.524 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:03.524 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:17:03.524 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
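The nvmftestfini teardown above only removes firewall rules that the test itself installed: every rule added earlier carries an 'SPDK_NVMF:...' comment, so the iptr helper can filter them out without disturbing unrelated rules. A minimal sketch of that idiom, assuming the three commands traced on nvmf/common.sh line 791 are piped together:

  # dump the ruleset, drop every rule the test tagged, restore the remainder
  iptables-save | grep -v SPDK_NVMF | iptables-restore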
00:17:03.524 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:03.524 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:03.524 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:03.524 13:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:03.524 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:03.524 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:03.524 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:03.524 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:03.524 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:03.782 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:03.782 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:03.782 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:03.782 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:03.782 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:03.782 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:03.782 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.782 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:03.782 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.782 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:17:03.782 00:17:03.782 real 0m9.958s 00:17:03.782 user 0m18.264s 00:17:03.782 sys 0m2.191s 00:17:03.782 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:03.782 ************************************ 00:17:03.782 END TEST nvmf_host_discovery 00:17:03.782 ************************************ 00:17:03.782 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:03.782 13:03:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:17:03.782 13:03:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:03.782 13:03:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:03.782 13:03:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.782 ************************************ 00:17:03.782 START TEST nvmf_host_multipath_status 00:17:03.782 ************************************ 00:17:03.782 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 
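Condensing the nvmf_veth_fini trace above: teardown runs roughly in reverse order of setup, detaching each veth endpoint from the bridge, bringing the host-side links down, deleting the bridge and the host-side interfaces, then removing the target-side interfaces from inside the namespace. A sketch of that order (only one of each interface pair shown; the final namespace removal happens in _remove_spdk_ns, whose trace is suppressed, and is assumed here to be ip netns delete):

  ip link set nvmf_init_br nomaster                           # detach from nvmf_br (likewise _br2, tgt_br, tgt_br2)
  ip link set nvmf_init_br down
  ip link delete nvmf_br type bridge                          # drop the bridge
  ip link delete nvmf_init_if                                 # host-side veth endpoints
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if   # target-side endpoints
  ip netns delete nvmf_tgt_ns_spdk                            # assumed final step of _remove_spdk_ns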
00:17:04.042 * Looking for test storage... 00:17:04.042 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:04.042 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:04.042 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:17:04.042 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:04.042 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:04.042 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:04.042 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:04.042 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:04.042 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:17:04.042 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:17:04.042 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:17:04.042 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:17:04.042 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:17:04.042 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:17:04.042 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:17:04.042 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:04.042 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:17:04.042 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:17:04.042 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:04.042 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:04.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.043 --rc genhtml_branch_coverage=1 00:17:04.043 --rc genhtml_function_coverage=1 00:17:04.043 --rc genhtml_legend=1 00:17:04.043 --rc geninfo_all_blocks=1 00:17:04.043 --rc geninfo_unexecuted_blocks=1 00:17:04.043 00:17:04.043 ' 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:04.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.043 --rc genhtml_branch_coverage=1 00:17:04.043 --rc genhtml_function_coverage=1 00:17:04.043 --rc genhtml_legend=1 00:17:04.043 --rc geninfo_all_blocks=1 00:17:04.043 --rc geninfo_unexecuted_blocks=1 00:17:04.043 00:17:04.043 ' 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:04.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.043 --rc genhtml_branch_coverage=1 00:17:04.043 --rc genhtml_function_coverage=1 00:17:04.043 --rc genhtml_legend=1 00:17:04.043 --rc geninfo_all_blocks=1 00:17:04.043 --rc geninfo_unexecuted_blocks=1 00:17:04.043 00:17:04.043 ' 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:04.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.043 --rc genhtml_branch_coverage=1 00:17:04.043 --rc genhtml_function_coverage=1 00:17:04.043 --rc genhtml_legend=1 00:17:04.043 --rc geninfo_all_blocks=1 00:17:04.043 --rc geninfo_unexecuted_blocks=1 00:17:04.043 00:17:04.043 ' 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:04.043 13:03:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:04.043 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:04.043 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:04.044 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:04.044 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:04.044 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:04.044 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:04.044 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:04.044 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:04.044 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:04.044 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:04.044 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:04.044 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:04.044 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:04.044 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:04.044 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:04.044 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:04.044 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:04.044 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:04.044 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:04.044 Cannot find device "nvmf_init_br" 00:17:04.044 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:17:04.044 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:04.044 Cannot find device "nvmf_init_br2" 00:17:04.044 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:17:04.044 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:04.044 Cannot find device "nvmf_tgt_br" 00:17:04.044 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:17:04.044 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:04.044 Cannot find device "nvmf_tgt_br2" 00:17:04.044 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:17:04.044 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:04.044 Cannot find device "nvmf_init_br" 00:17:04.044 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:17:04.044 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:04.044 Cannot find device "nvmf_init_br2" 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:04.303 Cannot find device "nvmf_tgt_br" 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:04.303 Cannot find device "nvmf_tgt_br2" 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:04.303 Cannot find device "nvmf_br" 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:17:04.303 Cannot find device "nvmf_init_if" 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:04.303 Cannot find device "nvmf_init_if2" 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:04.303 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:04.303 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:04.303 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:04.562 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:04.562 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:04.562 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:04.562 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:04.562 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:04.562 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:04.562 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:04.562 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:04.562 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:04.562 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:04.562 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:17:04.562 00:17:04.562 --- 10.0.0.3 ping statistics --- 00:17:04.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.562 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:17:04.562 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:04.562 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:04.562 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.079 ms 00:17:04.562 00:17:04.562 --- 10.0.0.4 ping statistics --- 00:17:04.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.562 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:17:04.562 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:04.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:04.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:17:04.562 00:17:04.562 --- 10.0.0.1 ping statistics --- 00:17:04.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.562 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:17:04.562 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:04.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:04.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:17:04.562 00:17:04.562 --- 10.0.0.2 ping statistics --- 00:17:04.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.562 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:17:04.562 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:04.562 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:17:04.562 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:04.562 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:04.562 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:04.562 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:04.562 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:04.562 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:04.562 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:04.562 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:17:04.562 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:04.562 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:04.562 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:04.562 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=76563 00:17:04.562 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 76563 00:17:04.562 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76563 ']' 00:17:04.562 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.562 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:04.562 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:04.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.562 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
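The setup traced just above builds the two-path topology this test depends on: initiator endpoints nvmf_init_if (10.0.0.1) and nvmf_init_if2 (10.0.0.2) stay in the default namespace, their target-side counterparts nvmf_tgt_if (10.0.0.3) and nvmf_tgt_if2 (10.0.0.4) are moved into nvmf_tgt_ns_spdk, everything is joined through the nvmf_br bridge, and the four pings confirm reachability before the target is launched inside the namespace. A condensed sketch using the same names (second interface pair and the link-up steps elided):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # launch the SPDK target inside the namespace: shm id 0, all tracepoint groups, cores 0-1
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3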
00:17:04.562 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:04.562 13:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:04.562 [2024-11-29 13:03:35.969138] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:17:04.562 [2024-11-29 13:03:35.969239] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.820 [2024-11-29 13:03:36.122523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:04.820 [2024-11-29 13:03:36.184209] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:04.820 [2024-11-29 13:03:36.184271] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:04.820 [2024-11-29 13:03:36.184285] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:04.820 [2024-11-29 13:03:36.184295] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:04.820 [2024-11-29 13:03:36.184305] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:04.820 [2024-11-29 13:03:36.185667] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.820 [2024-11-29 13:03:36.185680] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.820 [2024-11-29 13:03:36.245951] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:04.820 13:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:04.820 13:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:17:04.820 13:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:04.820 13:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:04.820 13:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:05.078 13:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.078 13:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76563 00:17:05.078 13:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:05.338 [2024-11-29 13:03:36.656101] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:05.338 13:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:05.598 Malloc0 00:17:05.598 13:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:05.857 13:03:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:06.114 13:03:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:06.371 [2024-11-29 13:03:37.766012] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:06.371 13:03:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:06.629 [2024-11-29 13:03:38.002185] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:06.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:06.629 13:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76611 00:17:06.629 13:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:06.629 13:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:06.629 13:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76611 /var/tmp/bdevperf.sock 00:17:06.629 13:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76611 ']' 00:17:06.629 13:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:06.629 13:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:06.630 13:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
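Before any path checking, multipath_status.sh provisions the target entirely over JSON-RPC, as traced above: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem created with ANA reporting enabled, the bdev exposed as its namespace, and listeners on both 10.0.0.3:4420 and 10.0.0.3:4421 so the host sees two paths to one namespace. Collected into a single sketch (rpc.py talks to the target's default /var/tmp/spdk.sock; -a allows any host, -r enables ANA reporting, -m 2 caps the namespace count):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py bdev_malloc_create 64 512 -b Malloc0
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421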
00:17:06.630 13:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:06.630 13:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:07.564 13:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:07.564 13:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:17:07.564 13:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:08.131 13:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:08.389 Nvme0n1 00:17:08.389 13:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:08.647 Nvme0n1 00:17:08.647 13:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:08.647 13:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:17:10.550 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:17:10.550 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:17:11.116 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:11.116 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:17:12.497 13:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:17:12.497 13:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:12.497 13:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:12.497 13:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:12.497 13:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:12.497 13:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:12.497 13:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:12.497 13:03:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:12.797 13:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:12.797 13:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:12.797 13:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:12.797 13:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:13.055 13:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:13.055 13:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:13.055 13:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:13.055 13:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:13.312 13:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:13.312 13:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:13.312 13:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:13.312 13:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:13.571 13:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:13.571 13:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:13.571 13:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:13.571 13:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:14.148 13:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:14.148 13:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:17:14.148 13:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:14.407 13:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
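Each check_status round above probes the host side of the multipath setup: the ANA state of a listener is flipped on the target, and the bdevperf process is then asked, over its own RPC socket, whether each port is current, connected, and accessible, with jq selecting the path by listener port. A sketch of one such probe, matching the non_optimized/optimized step in the trace:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # target side: make the 4420 listener non-optimized, keep 4421 optimized
  $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
  $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized
  # host side: ask bdevperf which path is now current (expect false for 4420, true for 4421)
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths | jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current'
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths | jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4421").current'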
00:17:14.666 13:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:17:15.602 13:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:17:15.602 13:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:15.602 13:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:15.602 13:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:15.860 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:15.860 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:15.860 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:15.860 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:16.117 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:16.117 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:16.117 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:16.117 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:16.684 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:16.684 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:16.684 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:16.684 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:16.943 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:16.943 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:16.943 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:16.943 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:17.201 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:17.201 13:03:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:17.201 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:17.201 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:17.460 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:17.460 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:17:17.460 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:17.719 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:17:17.979 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:17:19.353 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:17:19.353 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:19.353 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:19.353 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:19.353 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:19.353 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:19.353 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:19.353 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:19.612 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:19.612 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:19.612 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:19.612 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:19.871 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:19.871 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:19.871 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:19.871 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:20.129 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:20.129 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:20.129 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:20.129 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:20.387 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:20.387 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:20.387 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:20.387 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:20.952 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:20.952 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:17:20.952 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:20.952 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:21.517 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:17:22.464 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:17:22.464 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:22.464 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:22.464 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:22.723 13:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:22.723 13:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 
4421 current false 00:17:22.723 13:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:22.723 13:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:22.982 13:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:22.982 13:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:22.982 13:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:22.982 13:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:23.239 13:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:23.240 13:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:23.240 13:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:23.240 13:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:23.803 13:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:23.804 13:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:23.804 13:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:23.804 13:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:24.061 13:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:24.061 13:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:24.061 13:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:24.061 13:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:24.318 13:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:24.318 13:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:17:24.318 13:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:24.576 13:03:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:24.833 13:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:17:25.770 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:17:25.770 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:25.770 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:25.770 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:26.337 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:26.337 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:26.337 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:26.337 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:26.596 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:26.596 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:26.596 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:26.596 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:26.854 13:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:26.854 13:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:26.854 13:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:26.854 13:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:27.113 13:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:27.113 13:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:17:27.113 13:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:27.113 13:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] 
| select (.transport.trsvcid=="4420").accessible' 00:17:27.681 13:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:27.681 13:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:27.681 13:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:27.681 13:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:27.681 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:27.681 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:17:27.681 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:28.247 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:28.506 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:17:29.445 13:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:17:29.445 13:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:29.445 13:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:29.445 13:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:30.015 13:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:30.015 13:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:30.015 13:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:30.015 13:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:30.272 13:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:30.272 13:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:30.272 13:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:30.272 13:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
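[editor note] Each test step in the log flips the ANA state of the two listeners on cnode1 (ports 4420 and 4421), sleeps one second so the host can pick up the change, and then re-runs the path checks. A sketch of that step, assembled from the rpc.py calls shown above (helper name follows the script references in the xtrace; the real implementation may differ):

    # set_ANA_state STATE_4420 STATE_4421 — e.g. "set_ANA_state inaccessible optimized"
    set_ANA_state() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n "$1"
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n "$2"
    }
    # followed in the log by: sleep 1; check_status ... (port_status calls for both ports)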
00:17:30.530 13:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:30.530 13:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:30.530 13:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:30.530 13:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:30.788 13:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:30.788 13:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:17:30.788 13:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:30.788 13:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:31.355 13:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:31.355 13:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:31.355 13:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:31.355 13:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:31.614 13:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:31.614 13:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:17:31.873 13:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:17:31.873 13:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:17:32.132 13:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:32.390 13:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:17:33.324 13:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:17:33.324 13:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:33.324 13:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
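[editor note] The bdev_nvme_set_multipath_policy call at multipath_status.sh@116 above is the pivot point of the run: before it, every check_status invocation reports at most one path with current=true, while the check that follows the switch (check_status true true true true true true, with both listeners optimized) shows both 4420 and 4421 as current simultaneously. The command as it appears in the log:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active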
00:17:33.324 13:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:33.892 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:33.892 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:33.892 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:33.892 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:34.162 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:34.162 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:34.162 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:34.162 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:34.426 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:34.426 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:34.426 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:34.426 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:34.685 13:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:34.686 13:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:34.686 13:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:34.686 13:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:34.944 13:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:34.944 13:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:34.944 13:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:34.944 13:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:35.511 13:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:35.511 
13:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:17:35.511 13:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:35.770 13:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:36.028 13:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:17:36.966 13:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:17:36.966 13:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:36.966 13:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:36.966 13:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:37.244 13:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:37.244 13:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:37.244 13:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:37.244 13:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:37.825 13:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:37.825 13:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:37.825 13:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:37.825 13:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:38.083 13:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:38.083 13:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:38.083 13:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:38.083 13:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:38.342 13:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:38.342 13:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:38.342 13:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:38.342 13:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:38.600 13:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:38.600 13:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:38.600 13:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:38.600 13:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:38.859 13:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:38.859 13:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:17:38.859 13:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:39.425 13:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:17:39.683 13:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:17:40.618 13:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:17:40.618 13:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:40.618 13:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:40.618 13:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:40.877 13:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:40.877 13:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:40.877 13:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:40.877 13:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:41.136 13:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:41.136 13:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:17:41.136 13:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:41.136 13:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:41.703 13:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:41.703 13:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:41.703 13:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:41.703 13:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:41.961 13:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:41.961 13:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:41.961 13:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:41.961 13:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:42.220 13:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:42.220 13:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:42.220 13:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:42.220 13:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:42.479 13:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:42.479 13:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:17:42.479 13:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:42.738 13:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:42.997 13:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:17:44.372 13:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:17:44.372 13:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:44.372 13:04:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:44.372 13:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:44.372 13:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:44.372 13:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:44.372 13:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:44.372 13:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:44.631 13:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:44.631 13:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:44.631 13:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:44.632 13:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:44.890 13:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:44.890 13:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:44.890 13:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:44.890 13:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:45.149 13:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:45.149 13:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:45.149 13:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:45.149 13:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:45.407 13:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:45.407 13:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:45.407 13:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:45.407 13:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:46.040 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:46.040 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76611 00:17:46.040 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76611 ']' 00:17:46.040 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76611 00:17:46.040 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:17:46.040 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:46.040 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76611 00:17:46.040 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:46.040 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:46.040 killing process with pid 76611 00:17:46.040 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76611' 00:17:46.040 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76611 00:17:46.040 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76611 00:17:46.040 { 00:17:46.040 "results": [ 00:17:46.040 { 00:17:46.040 "job": "Nvme0n1", 00:17:46.040 "core_mask": "0x4", 00:17:46.040 "workload": "verify", 00:17:46.040 "status": "terminated", 00:17:46.040 "verify_range": { 00:17:46.040 "start": 0, 00:17:46.040 "length": 16384 00:17:46.040 }, 00:17:46.040 "queue_depth": 128, 00:17:46.040 "io_size": 4096, 00:17:46.040 "runtime": 37.027171, 00:17:46.040 "iops": 7791.980651181804, 00:17:46.040 "mibps": 30.437424418678923, 00:17:46.040 "io_failed": 0, 00:17:46.040 "io_timeout": 0, 00:17:46.040 "avg_latency_us": 16393.57266806673, 00:17:46.040 "min_latency_us": 1042.6181818181817, 00:17:46.040 "max_latency_us": 4026531.84 00:17:46.040 } 00:17:46.040 ], 00:17:46.040 "core_count": 1 00:17:46.040 } 00:17:46.040 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76611 00:17:46.040 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:46.040 [2024-11-29 13:03:38.081023] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:17:46.040 [2024-11-29 13:03:38.081167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76611 ] 00:17:46.040 [2024-11-29 13:03:38.235573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.040 [2024-11-29 13:03:38.305052] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:46.040 [2024-11-29 13:03:38.364596] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:46.040 Running I/O for 90 seconds... 
00:17:46.040 7247.00 IOPS, 28.31 MiB/s [2024-11-29T13:04:17.555Z] 7971.50 IOPS, 31.14 MiB/s [2024-11-29T13:04:17.555Z] 8194.33 IOPS, 32.01 MiB/s [2024-11-29T13:04:17.555Z] 8219.75 IOPS, 32.11 MiB/s [2024-11-29T13:04:17.555Z] 8187.00 IOPS, 31.98 MiB/s [2024-11-29T13:04:17.555Z] 8112.00 IOPS, 31.69 MiB/s [2024-11-29T13:04:17.555Z] 8089.14 IOPS, 31.60 MiB/s [2024-11-29T13:04:17.555Z] 8094.00 IOPS, 31.62 MiB/s [2024-11-29T13:04:17.555Z] 8071.56 IOPS, 31.53 MiB/s [2024-11-29T13:04:17.555Z] 8057.50 IOPS, 31.47 MiB/s [2024-11-29T13:04:17.555Z] 8051.91 IOPS, 31.45 MiB/s [2024-11-29T13:04:17.555Z] 8055.17 IOPS, 31.47 MiB/s [2024-11-29T13:04:17.556Z] 8061.38 IOPS, 31.49 MiB/s [2024-11-29T13:04:17.556Z] 8053.57 IOPS, 31.46 MiB/s [2024-11-29T13:04:17.556Z] 8045.73 IOPS, 31.43 MiB/s [2024-11-29T13:04:17.556Z] [2024-11-29 13:03:55.910501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:92336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.041 [2024-11-29 13:03:55.910590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.910672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:92344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.041 [2024-11-29 13:03:55.910692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.910715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:92352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.041 [2024-11-29 13:03:55.910732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.910754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:92360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.041 [2024-11-29 13:03:55.910779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.910801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:92368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.041 [2024-11-29 13:03:55.910815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.910836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:92376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.041 [2024-11-29 13:03:55.910850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.910871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:92384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.041 [2024-11-29 13:03:55.910886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.910921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:92392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.041 [2024-11-29 13:03:55.910937] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.910959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:91696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.041 [2024-11-29 13:03:55.910974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.911028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:91704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.041 [2024-11-29 13:03:55.911070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.911092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.041 [2024-11-29 13:03:55.911107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.911129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:91720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.041 [2024-11-29 13:03:55.911144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.911165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.041 [2024-11-29 13:03:55.911180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.911201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:91736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.041 [2024-11-29 13:03:55.911216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.911238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:91744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.041 [2024-11-29 13:03:55.911254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.911275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:91752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.041 [2024-11-29 13:03:55.911290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.911312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:91760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.041 [2024-11-29 13:03:55.911326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.911348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:91768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:46.041 [2024-11-29 13:03:55.911363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.911384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:91776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.041 [2024-11-29 13:03:55.911400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.911422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:91784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.041 [2024-11-29 13:03:55.911436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.911458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:91792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.041 [2024-11-29 13:03:55.911472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.911520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.041 [2024-11-29 13:03:55.911536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.911557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:91808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.041 [2024-11-29 13:03:55.911571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.911592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:91816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.041 [2024-11-29 13:03:55.911607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.911629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:91824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.041 [2024-11-29 13:03:55.911643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.911664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:91832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.041 [2024-11-29 13:03:55.911679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.911699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:91840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.041 [2024-11-29 13:03:55.911714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.911735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 
nsid:1 lba:91848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.041 [2024-11-29 13:03:55.911755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.911776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:91856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.041 [2024-11-29 13:03:55.911790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.911811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:91864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.041 [2024-11-29 13:03:55.911825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.911846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:91872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.041 [2024-11-29 13:03:55.911860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.911882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:91880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.041 [2024-11-29 13:03:55.911924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.911962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.041 [2024-11-29 13:03:55.911981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.912013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:92408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.041 [2024-11-29 13:03:55.912029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.912051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.041 [2024-11-29 13:03:55.912067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.912089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:92424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.041 [2024-11-29 13:03:55.912104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.912125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:92432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.041 [2024-11-29 13:03:55.912140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.912162] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:92440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.041 [2024-11-29 13:03:55.912176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.912198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:92448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.041 [2024-11-29 13:03:55.912212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.912234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:92456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.041 [2024-11-29 13:03:55.912249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.912270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:91888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.041 [2024-11-29 13:03:55.912285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.912306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:91896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.041 [2024-11-29 13:03:55.912335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.912356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:91904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.041 [2024-11-29 13:03:55.912371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:46.041 [2024-11-29 13:03:55.912393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:91912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.041 [2024-11-29 13:03:55.912412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.912433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:91920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.042 [2024-11-29 13:03:55.912448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.912468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:91928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.042 [2024-11-29 13:03:55.912490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.912512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:91936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.042 [2024-11-29 13:03:55.912527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 
00:17:46.042 [2024-11-29 13:03:55.912548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:91944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.042 [2024-11-29 13:03:55.912562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.912584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:91952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.042 [2024-11-29 13:03:55.912598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.912619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:91960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.042 [2024-11-29 13:03:55.912633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.912655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:91968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.042 [2024-11-29 13:03:55.912669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.912690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:91976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.042 [2024-11-29 13:03:55.912704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.912725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:91984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.042 [2024-11-29 13:03:55.912740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.912761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:91992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.042 [2024-11-29 13:03:55.912776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.912801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:92000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.042 [2024-11-29 13:03:55.912815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.912836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:92008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.042 [2024-11-29 13:03:55.912851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.912875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:92464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.042 [2024-11-29 13:03:55.912900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.912925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:92472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.042 [2024-11-29 13:03:55.912948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.912971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:92480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.042 [2024-11-29 13:03:55.912986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.913007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:92488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.042 [2024-11-29 13:03:55.913022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.913042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:92496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.042 [2024-11-29 13:03:55.913057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.913078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:92504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.042 [2024-11-29 13:03:55.913092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.913113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:92512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.042 [2024-11-29 13:03:55.913127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.913149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:92520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.042 [2024-11-29 13:03:55.913163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.913184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:92016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.042 [2024-11-29 13:03:55.913198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.913219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:92024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.042 [2024-11-29 13:03:55.913250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.913273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:92032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.042 [2024-11-29 13:03:55.913288] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.913309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:92040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.042 [2024-11-29 13:03:55.913324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.913347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:92048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.042 [2024-11-29 13:03:55.913372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.913401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.042 [2024-11-29 13:03:55.913416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.913445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:92064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.042 [2024-11-29 13:03:55.913461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.913483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:92072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.042 [2024-11-29 13:03:55.913498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.913519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:92080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.042 [2024-11-29 13:03:55.913533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.913555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:92088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.042 [2024-11-29 13:03:55.913570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.913592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:92096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.042 [2024-11-29 13:03:55.913607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.913628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:92104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.042 [2024-11-29 13:03:55.913643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.913664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:92112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:46.042 [2024-11-29 13:03:55.913678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.913700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:92120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.042 [2024-11-29 13:03:55.913714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.913735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:92128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.042 [2024-11-29 13:03:55.913750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.913777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:92136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.042 [2024-11-29 13:03:55.913791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.913816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:92528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.042 [2024-11-29 13:03:55.913832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.913854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:92536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.042 [2024-11-29 13:03:55.913886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.913934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:92544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.042 [2024-11-29 13:03:55.913953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.913974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:92552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.042 [2024-11-29 13:03:55.913989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.914010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:92560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.042 [2024-11-29 13:03:55.914031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.914053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:92568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.042 [2024-11-29 13:03:55.914068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.914089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 
nsid:1 lba:92576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.042 [2024-11-29 13:03:55.914109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:46.042 [2024-11-29 13:03:55.914130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:92584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.042 [2024-11-29 13:03:55.914151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.914172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:92144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.043 [2024-11-29 13:03:55.914186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.914207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:92152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.043 [2024-11-29 13:03:55.914221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.914242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:92160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.043 [2024-11-29 13:03:55.914257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.914278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:92168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.043 [2024-11-29 13:03:55.914292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.914329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:92176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.043 [2024-11-29 13:03:55.914344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.914365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:92184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.043 [2024-11-29 13:03:55.914379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.914401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:92192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.043 [2024-11-29 13:03:55.914423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.914446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:92200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.043 [2024-11-29 13:03:55.914461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.914486] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:92592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.043 [2024-11-29 13:03:55.914502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.914525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:92600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.043 [2024-11-29 13:03:55.914539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.914566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:92608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.043 [2024-11-29 13:03:55.914582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.914603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.043 [2024-11-29 13:03:55.914618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.914640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:92624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.043 [2024-11-29 13:03:55.914660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.914682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:92632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.043 [2024-11-29 13:03:55.914697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.914719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:92640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.043 [2024-11-29 13:03:55.914733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.914755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.043 [2024-11-29 13:03:55.914769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.914791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:92208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.043 [2024-11-29 13:03:55.914805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.914827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:92216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.043 [2024-11-29 13:03:55.914856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:17:46.043 [2024-11-29 13:03:55.914877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:92224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.043 [2024-11-29 13:03:55.914898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.914930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:92232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.043 [2024-11-29 13:03:55.914946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.914968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:92240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.043 [2024-11-29 13:03:55.914990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.915011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:92248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.043 [2024-11-29 13:03:55.915026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.915072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:92256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.043 [2024-11-29 13:03:55.915087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.915108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:92264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.043 [2024-11-29 13:03:55.915123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.915145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:92272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.043 [2024-11-29 13:03:55.915160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.915181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:92280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.043 [2024-11-29 13:03:55.915195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.915222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:92288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.043 [2024-11-29 13:03:55.915237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.915258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:92296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.043 [2024-11-29 13:03:55.915273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.915295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:92304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.043 [2024-11-29 13:03:55.915314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.915336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:92312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.043 [2024-11-29 13:03:55.915351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.915387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:92320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.043 [2024-11-29 13:03:55.915401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.916177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:92328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.043 [2024-11-29 13:03:55.916206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.916241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:92656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.043 [2024-11-29 13:03:55.916258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.916287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:92664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.043 [2024-11-29 13:03:55.916317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.916345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:92672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.043 [2024-11-29 13:03:55.916359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.916387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:92680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.043 [2024-11-29 13:03:55.916401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.916433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:92688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.043 [2024-11-29 13:03:55.916447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.916475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:92696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.043 [2024-11-29 13:03:55.916490] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.916518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:92704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.043 [2024-11-29 13:03:55.916533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:46.043 [2024-11-29 13:03:55.916586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:92712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.043 [2024-11-29 13:03:55.916606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:46.043 7884.38 IOPS, 30.80 MiB/s [2024-11-29T13:04:17.558Z] 7420.59 IOPS, 28.99 MiB/s [2024-11-29T13:04:17.558Z] 7008.33 IOPS, 27.38 MiB/s [2024-11-29T13:04:17.558Z] 6639.47 IOPS, 25.94 MiB/s [2024-11-29T13:04:17.558Z] 6418.95 IOPS, 25.07 MiB/s [2024-11-29T13:04:17.558Z] 6488.24 IOPS, 25.34 MiB/s [2024-11-29T13:04:17.558Z] 6553.32 IOPS, 25.60 MiB/s [2024-11-29T13:04:17.558Z] 6641.26 IOPS, 25.94 MiB/s [2024-11-29T13:04:17.558Z] 6685.33 IOPS, 26.11 MiB/s [2024-11-29T13:04:17.558Z] 6771.12 IOPS, 26.45 MiB/s [2024-11-29T13:04:17.559Z] 6943.19 IOPS, 27.12 MiB/s [2024-11-29T13:04:17.559Z] 7104.04 IOPS, 27.75 MiB/s [2024-11-29T13:04:17.559Z] 7155.25 IOPS, 27.95 MiB/s [2024-11-29T13:04:17.559Z] 7202.31 IOPS, 28.13 MiB/s [2024-11-29T13:04:17.559Z] 7252.37 IOPS, 28.33 MiB/s [2024-11-29T13:04:17.559Z] 7319.65 IOPS, 28.59 MiB/s [2024-11-29T13:04:17.559Z] 7448.50 IOPS, 29.10 MiB/s [2024-11-29T13:04:17.559Z] 7576.36 IOPS, 29.60 MiB/s [2024-11-29T13:04:17.559Z] 7683.18 IOPS, 30.01 MiB/s [2024-11-29T13:04:17.559Z] [2024-11-29 13:04:14.479278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.044 [2024-11-29 13:04:14.479390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.479477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.044 [2024-11-29 13:04:14.479497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.479520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.044 [2024-11-29 13:04:14.479558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.479580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.044 [2024-11-29 13:04:14.479595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.479616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.044 [2024-11-29 13:04:14.479631] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.479653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.044 [2024-11-29 13:04:14.479668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.479689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.044 [2024-11-29 13:04:14.479720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.479741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.044 [2024-11-29 13:04:14.479756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.479778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.044 [2024-11-29 13:04:14.479792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.479813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.044 [2024-11-29 13:04:14.479827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.479850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.044 [2024-11-29 13:04:14.479864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.479886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.044 [2024-11-29 13:04:14.479932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.479957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.044 [2024-11-29 13:04:14.479973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.479996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.044 [2024-11-29 13:04:14.480035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.480060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:46.044 [2024-11-29 13:04:14.480075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.480097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.044 [2024-11-29 13:04:14.480113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.480136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.044 [2024-11-29 13:04:14.480151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.480177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:17456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.044 [2024-11-29 13:04:14.480194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.480216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:17472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.044 [2024-11-29 13:04:14.480231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.480253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.044 [2024-11-29 13:04:14.480267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.480289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.044 [2024-11-29 13:04:14.480304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.480326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.044 [2024-11-29 13:04:14.480344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.480366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.044 [2024-11-29 13:04:14.480380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.480410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.044 [2024-11-29 13:04:14.480426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.480447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 
lba:16944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.044 [2024-11-29 13:04:14.480462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.480484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.044 [2024-11-29 13:04:14.480506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.480530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.044 [2024-11-29 13:04:14.480545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.480567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.044 [2024-11-29 13:04:14.480582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.480604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:17192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.044 [2024-11-29 13:04:14.480619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.480641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.044 [2024-11-29 13:04:14.480671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.480692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.044 [2024-11-29 13:04:14.480707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.480728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.044 [2024-11-29 13:04:14.480743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.480764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.044 [2024-11-29 13:04:14.480779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.480801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.044 [2024-11-29 13:04:14.480816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.480837] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.044 [2024-11-29 13:04:14.480852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.480872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.044 [2024-11-29 13:04:14.480887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.480925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.044 [2024-11-29 13:04:14.480943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:46.044 [2024-11-29 13:04:14.480964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.045 [2024-11-29 13:04:14.480980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.481010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.045 [2024-11-29 13:04:14.481025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.481046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.045 [2024-11-29 13:04:14.481060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.481082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.045 [2024-11-29 13:04:14.481097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.481118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.045 [2024-11-29 13:04:14.481144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.481170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.045 [2024-11-29 13:04:14.481185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.481207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.045 [2024-11-29 13:04:14.481222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 
00:17:46.045 [2024-11-29 13:04:14.481243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.045 [2024-11-29 13:04:14.481259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.481280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.045 [2024-11-29 13:04:14.481295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.481317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.045 [2024-11-29 13:04:14.481332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.481354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.045 [2024-11-29 13:04:14.481370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.481391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.045 [2024-11-29 13:04:14.481407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.481430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.045 [2024-11-29 13:04:14.481445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.481489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.045 [2024-11-29 13:04:14.481504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.481542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:17328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.045 [2024-11-29 13:04:14.481558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.481580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.045 [2024-11-29 13:04:14.481596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.481618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.045 [2024-11-29 13:04:14.481638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.481660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.045 [2024-11-29 13:04:14.481675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.481697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.045 [2024-11-29 13:04:14.481712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.481734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.045 [2024-11-29 13:04:14.481749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.481770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.045 [2024-11-29 13:04:14.481785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.481808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.045 [2024-11-29 13:04:14.481823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.481845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.045 [2024-11-29 13:04:14.481860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.481882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.045 [2024-11-29 13:04:14.481897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.481931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.045 [2024-11-29 13:04:14.481949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.481972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.045 [2024-11-29 13:04:14.481995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.482018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.045 [2024-11-29 13:04:14.482034] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.482056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.045 [2024-11-29 13:04:14.482071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.484234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:17488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.045 [2024-11-29 13:04:14.484277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.484307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.045 [2024-11-29 13:04:14.484336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.484359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.045 [2024-11-29 13:04:14.484374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.484397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.045 [2024-11-29 13:04:14.484413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.484436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.045 [2024-11-29 13:04:14.484451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.484472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.045 [2024-11-29 13:04:14.484488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.484510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.045 [2024-11-29 13:04:14.484525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.484551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.045 [2024-11-29 13:04:14.484567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.484589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:46.045 [2024-11-29 13:04:14.484604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.484626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.045 [2024-11-29 13:04:14.484654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.484677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.045 [2024-11-29 13:04:14.484692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:46.045 [2024-11-29 13:04:14.484714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.046 [2024-11-29 13:04:14.484729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.484751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.046 [2024-11-29 13:04:14.484765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.484787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.046 [2024-11-29 13:04:14.484802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.484824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.046 [2024-11-29 13:04:14.484839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.484860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.046 [2024-11-29 13:04:14.484894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.484921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.046 [2024-11-29 13:04:14.484937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.484959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.046 [2024-11-29 13:04:14.484973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.484995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:41 nsid:1 lba:17992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.046 [2024-11-29 13:04:14.485012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.485049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.046 [2024-11-29 13:04:14.485069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.485092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:18024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.046 [2024-11-29 13:04:14.485108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.485129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.046 [2024-11-29 13:04:14.485145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.485177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.046 [2024-11-29 13:04:14.485193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.485216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.046 [2024-11-29 13:04:14.485231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.485253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.046 [2024-11-29 13:04:14.485267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.485289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.046 [2024-11-29 13:04:14.485306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.485327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.046 [2024-11-29 13:04:14.485345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.485368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.046 [2024-11-29 13:04:14.485383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.485405] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.046 [2024-11-29 13:04:14.485420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.485442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.046 [2024-11-29 13:04:14.485457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.485478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.046 [2024-11-29 13:04:14.485493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.485514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.046 [2024-11-29 13:04:14.485529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.485551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.046 [2024-11-29 13:04:14.485566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.485588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.046 [2024-11-29 13:04:14.485604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.485648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:17160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.046 [2024-11-29 13:04:14.485664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.485685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.046 [2024-11-29 13:04:14.485701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.485723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.046 [2024-11-29 13:04:14.485738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.485760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.046 [2024-11-29 13:04:14.485774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003e p:0 m:0 dnr:0 
00:17:46.046 [2024-11-29 13:04:14.485795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.046 [2024-11-29 13:04:14.485815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.485853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:17656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.046 [2024-11-29 13:04:14.485869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.485899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.046 [2024-11-29 13:04:14.485928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.485952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.046 [2024-11-29 13:04:14.485968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.485990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.046 [2024-11-29 13:04:14.486005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.486027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.046 [2024-11-29 13:04:14.486041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.486067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.046 [2024-11-29 13:04:14.486084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.486106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.046 [2024-11-29 13:04:14.486121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.486142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.046 [2024-11-29 13:04:14.486165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.486189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.046 [2024-11-29 13:04:14.486204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.486226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.046 [2024-11-29 13:04:14.486240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.486263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.046 [2024-11-29 13:04:14.486278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:46.046 [2024-11-29 13:04:14.486311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.047 [2024-11-29 13:04:14.486327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.486349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.047 [2024-11-29 13:04:14.486376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.486399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.047 [2024-11-29 13:04:14.486414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.486436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.047 [2024-11-29 13:04:14.486451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.486488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.047 [2024-11-29 13:04:14.486503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.486530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.047 [2024-11-29 13:04:14.486559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.486579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.047 [2024-11-29 13:04:14.486594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.486614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:17840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.047 [2024-11-29 13:04:14.486627] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.486662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.047 [2024-11-29 13:04:14.486683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.486704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.047 [2024-11-29 13:04:14.486726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.486761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.047 [2024-11-29 13:04:14.486788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.486836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.047 [2024-11-29 13:04:14.486851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.486871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.047 [2024-11-29 13:04:14.486885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.486920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:18136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.047 [2024-11-29 13:04:14.486949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.486982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.047 [2024-11-29 13:04:14.486999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.487071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.047 [2024-11-29 13:04:14.487090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.487112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.047 [2024-11-29 13:04:14.487127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.487148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:46.047 [2024-11-29 13:04:14.487169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.489768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.047 [2024-11-29 13:04:14.489797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.489823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.047 [2024-11-29 13:04:14.489838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.489875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.047 [2024-11-29 13:04:14.489901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.489923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.047 [2024-11-29 13:04:14.489953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.489988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:18216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.047 [2024-11-29 13:04:14.490002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.490022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.047 [2024-11-29 13:04:14.490036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.490085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.047 [2024-11-29 13:04:14.490103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.490154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.047 [2024-11-29 13:04:14.490184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.490205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.047 [2024-11-29 13:04:14.490236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.490257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 
lba:18232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.047 [2024-11-29 13:04:14.490273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.490295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.047 [2024-11-29 13:04:14.490310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.490332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.047 [2024-11-29 13:04:14.490350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.491372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.047 [2024-11-29 13:04:14.491401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.491432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.047 [2024-11-29 13:04:14.491448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.491470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.047 [2024-11-29 13:04:14.491485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.491581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.047 [2024-11-29 13:04:14.491630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.491652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.047 [2024-11-29 13:04:14.491666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.491688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.047 [2024-11-29 13:04:14.491718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.491754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.047 [2024-11-29 13:04:14.491768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.491789] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.047 [2024-11-29 13:04:14.491803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.491823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.047 [2024-11-29 13:04:14.491837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:46.047 [2024-11-29 13:04:14.491858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.048 [2024-11-29 13:04:14.491872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:46.048 [2024-11-29 13:04:14.491916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.048 [2024-11-29 13:04:14.491930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:46.048 [2024-11-29 13:04:14.491951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.048 [2024-11-29 13:04:14.491966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:46.048 [2024-11-29 13:04:14.492032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.048 [2024-11-29 13:04:14.492050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:46.048 [2024-11-29 13:04:14.492072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.048 [2024-11-29 13:04:14.492087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:46.048 [2024-11-29 13:04:14.492109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.048 [2024-11-29 13:04:14.492124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:46.048 [2024-11-29 13:04:14.492199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.048 [2024-11-29 13:04:14.492215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:46.048 [2024-11-29 13:04:14.492236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:16984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.048 [2024-11-29 13:04:14.492251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0079 p:0 m:0 
dnr:0 00:17:46.048 [2024-11-29 13:04:14.492273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.048 [2024-11-29 13:04:14.492288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:46.048 [2024-11-29 13:04:14.492308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.048 [2024-11-29 13:04:14.492329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:46.048 [2024-11-29 13:04:14.492375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.048 [2024-11-29 13:04:14.492389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:46.048 [2024-11-29 13:04:14.492410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.048 [2024-11-29 13:04:14.492424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:46.048 [2024-11-29 13:04:14.492445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:17056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.048 [2024-11-29 13:04:14.492459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:46.048 [2024-11-29 13:04:14.492480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.048 [2024-11-29 13:04:14.492509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:46.048 [2024-11-29 13:04:14.492545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.048 [2024-11-29 13:04:14.492573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.048 [2024-11-29 13:04:14.492593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.048 [2024-11-29 13:04:14.492606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.048 [2024-11-29 13:04:14.492626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.048 [2024-11-29 13:04:14.492640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:46.048 [2024-11-29 13:04:14.492660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.048 [2024-11-29 13:04:14.492674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:46.048 [2024-11-29 13:04:14.492693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.048 [2024-11-29 13:04:14.492718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:46.048 [2024-11-29 13:04:14.492740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.048 [2024-11-29 13:04:14.492753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:46.048 [2024-11-29 13:04:14.492773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.048 [2024-11-29 13:04:14.492786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:46.048 [2024-11-29 13:04:14.492807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.048 [2024-11-29 13:04:14.492820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:46.048 [2024-11-29 13:04:14.492857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.048 [2024-11-29 13:04:14.492875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:46.048 [2024-11-29 13:04:14.492896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.048 [2024-11-29 13:04:14.492925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:46.048 [2024-11-29 13:04:14.492959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.048 [2024-11-29 13:04:14.492982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:46.048 [2024-11-29 13:04:14.493001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.048 [2024-11-29 13:04:14.493014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:46.048 [2024-11-29 13:04:14.493045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.048 [2024-11-29 13:04:14.493075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:46.048 [2024-11-29 13:04:14.493095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.048 [2024-11-29 13:04:14.493109] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:46.048 [2024-11-29 13:04:14.493129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.048 [2024-11-29 13:04:14.493175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:46.048 [2024-11-29 13:04:14.493196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.048 [2024-11-29 13:04:14.493226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:46.048 [2024-11-29 13:04:14.493265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.048 [2024-11-29 13:04:14.493289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:46.048 [2024-11-29 13:04:14.493313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.048 [2024-11-29 13:04:14.493328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:46.048 [2024-11-29 13:04:14.493350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.048 [2024-11-29 13:04:14.493365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:46.048 [2024-11-29 13:04:14.493386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.048 [2024-11-29 13:04:14.493401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:46.048 [2024-11-29 13:04:14.493423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.048 [2024-11-29 13:04:14.493437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:46.049 [2024-11-29 13:04:14.493459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.049 [2024-11-29 13:04:14.493474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:46.049 [2024-11-29 13:04:14.493496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.049 [2024-11-29 13:04:14.493526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:46.049 [2024-11-29 13:04:14.495064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:46.049 [2024-11-29 13:04:14.495094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:46.049 [2024-11-29 13:04:14.495122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.049 [2024-11-29 13:04:14.495139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:46.049 [2024-11-29 13:04:14.495162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.049 [2024-11-29 13:04:14.495177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:46.049 [2024-11-29 13:04:14.495199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.049 [2024-11-29 13:04:14.495214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:46.049 [2024-11-29 13:04:14.495236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.049 [2024-11-29 13:04:14.495251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:46.049 [2024-11-29 13:04:14.495273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.049 [2024-11-29 13:04:14.495288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:46.049 [2024-11-29 13:04:14.495323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.049 [2024-11-29 13:04:14.495346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:46.049 [2024-11-29 13:04:14.495371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.049 [2024-11-29 13:04:14.495386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:46.049 [2024-11-29 13:04:14.495409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.049 [2024-11-29 13:04:14.495423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:46.049 [2024-11-29 13:04:14.495459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.049 [2024-11-29 13:04:14.495483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:46.049 [2024-11-29 13:04:14.495533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 
lba:17520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.049 [2024-11-29 13:04:14.495605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.049 [2024-11-29 13:04:14.495624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.049 [2024-11-29 13:04:14.495637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:46.049 [2024-11-29 13:04:14.495656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.049 [2024-11-29 13:04:14.495669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:46.049 [2024-11-29 13:04:14.495688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.049 [2024-11-29 13:04:14.495733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:46.049 [2024-11-29 13:04:14.495804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.049 [2024-11-29 13:04:14.495824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:46.049 [2024-11-29 13:04:14.495847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.049 [2024-11-29 13:04:14.495862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:46.049 [2024-11-29 13:04:14.495891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.049 [2024-11-29 13:04:14.495906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:46.049 [2024-11-29 13:04:14.495927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.049 [2024-11-29 13:04:14.495942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:46.049 [2024-11-29 13:04:14.495973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.049 [2024-11-29 13:04:14.495998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:46.049 [2024-11-29 13:04:14.496020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.049 [2024-11-29 13:04:14.496066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:46.049 [2024-11-29 13:04:14.496091] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.049 [2024-11-29 13:04:14.496121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:46.049 [2024-11-29 13:04:14.496201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.049 [2024-11-29 13:04:14.496229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:46.049 [2024-11-29 13:04:14.496250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.049 [2024-11-29 13:04:14.496265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:46.049 [2024-11-29 13:04:14.496286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.049 [2024-11-29 13:04:14.496316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:46.049 [2024-11-29 13:04:14.496352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.049 [2024-11-29 13:04:14.496365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:46.049 [2024-11-29 13:04:14.496386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.049 [2024-11-29 13:04:14.496415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:46.049 [2024-11-29 13:04:14.496436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.049 [2024-11-29 13:04:14.496451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:46.049 [2024-11-29 13:04:14.496472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.049 [2024-11-29 13:04:14.496522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:46.049 [2024-11-29 13:04:14.496557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.049 [2024-11-29 13:04:14.496606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:46.049 [2024-11-29 13:04:14.496627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.049 [2024-11-29 13:04:14.496641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 
00:17:46.049 [2024-11-29 13:04:14.496661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.049 [2024-11-29 13:04:14.496682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:46.049 [2024-11-29 13:04:14.496711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.049 [2024-11-29 13:04:14.496736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:46.049 [2024-11-29 13:04:14.497788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.050 [2024-11-29 13:04:14.497815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:46.050 [2024-11-29 13:04:14.497841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:18400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.050 [2024-11-29 13:04:14.497868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:46.050 [2024-11-29 13:04:14.497890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.050 [2024-11-29 13:04:14.497905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:46.050 [2024-11-29 13:04:14.497926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.050 [2024-11-29 13:04:14.497940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:46.050 [2024-11-29 13:04:14.497962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.050 [2024-11-29 13:04:14.497993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:46.050 [2024-11-29 13:04:14.498028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.050 [2024-11-29 13:04:14.498046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:46.050 [2024-11-29 13:04:14.498073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.050 [2024-11-29 13:04:14.498089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:46.050 [2024-11-29 13:04:14.498126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.050 [2024-11-29 13:04:14.498156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:40 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:46.050 [2024-11-29 13:04:14.498179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.050 [2024-11-29 13:04:14.498193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:46.050 [2024-11-29 13:04:14.498222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.050 [2024-11-29 13:04:14.498238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:46.050 [2024-11-29 13:04:14.498260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:18040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.050 [2024-11-29 13:04:14.498286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.050 [2024-11-29 13:04:14.498319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.050 [2024-11-29 13:04:14.498335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:46.050 [2024-11-29 13:04:14.498357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.050 [2024-11-29 13:04:14.498372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:46.050 [2024-11-29 13:04:14.498412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.050 [2024-11-29 13:04:14.498432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:46.050 [2024-11-29 13:04:14.498454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.050 [2024-11-29 13:04:14.498470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:46.050 [2024-11-29 13:04:14.498492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.050 [2024-11-29 13:04:14.498506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:46.050 [2024-11-29 13:04:14.498530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.050 [2024-11-29 13:04:14.498545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:46.050 [2024-11-29 13:04:14.498569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.050 [2024-11-29 13:04:14.498584] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:46.050 [2024-11-29 13:04:14.498636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.050 [2024-11-29 13:04:14.498650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:46.050 [2024-11-29 13:04:14.498670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.050 [2024-11-29 13:04:14.498684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:46.050 [2024-11-29 13:04:14.498705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.050 [2024-11-29 13:04:14.498719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:46.050 [2024-11-29 13:04:14.498782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.050 [2024-11-29 13:04:14.498796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:46.050 [2024-11-29 13:04:14.498816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.050 [2024-11-29 13:04:14.498831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:46.050 [2024-11-29 13:04:14.498877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.050 [2024-11-29 13:04:14.498893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:46.050 [2024-11-29 13:04:14.498914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.050 [2024-11-29 13:04:14.498928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:46.050 [2024-11-29 13:04:14.498982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.050 [2024-11-29 13:04:14.499010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:46.050 [2024-11-29 13:04:14.500085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.050 [2024-11-29 13:04:14.500127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:46.050 [2024-11-29 13:04:14.500168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:46.050 [2024-11-29 13:04:14.500187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:46.050 [2024-11-29 13:04:14.500211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.050 [2024-11-29 13:04:14.500226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:46.050 [2024-11-29 13:04:14.500247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.050 [2024-11-29 13:04:14.500263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:46.050 [2024-11-29 13:04:14.500285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.050 [2024-11-29 13:04:14.500300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:46.050 [2024-11-29 13:04:14.500322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.050 [2024-11-29 13:04:14.500337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:46.050 [2024-11-29 13:04:14.500358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.050 [2024-11-29 13:04:14.500373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:46.050 7733.57 IOPS, 30.21 MiB/s [2024-11-29T13:04:17.565Z] 7773.42 IOPS, 30.36 MiB/s [2024-11-29T13:04:17.565Z] 7793.59 IOPS, 30.44 MiB/s [2024-11-29T13:04:17.565Z] Received shutdown signal, test time was about 37.028007 seconds 00:17:46.050 00:17:46.050 Latency(us) 00:17:46.050 [2024-11-29T13:04:17.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:46.050 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:46.050 Verification LBA range: start 0x0 length 0x4000 00:17:46.051 Nvme0n1 : 37.03 7791.98 30.44 0.00 0.00 16393.57 1042.62 4026531.84 00:17:46.051 [2024-11-29T13:04:17.566Z] =================================================================================================================== 00:17:46.051 [2024-11-29T13:04:17.566Z] Total : 7791.98 30.44 0.00 0.00 16393.57 1042.62 4026531.84 00:17:46.051 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:46.314 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:17:46.314 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:46.314 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:17:46.314 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:17:46.314 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:17:46.573 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:46.573 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:17:46.573 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:46.573 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:46.573 rmmod nvme_tcp 00:17:46.573 rmmod nvme_fabrics 00:17:46.573 rmmod nvme_keyring 00:17:46.573 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:46.573 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:17:46.573 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:17:46.573 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 76563 ']' 00:17:46.573 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 76563 00:17:46.573 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76563 ']' 00:17:46.573 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76563 00:17:46.573 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:17:46.573 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:46.573 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76563 00:17:46.573 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:46.573 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:46.573 killing process with pid 76563 00:17:46.573 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76563' 00:17:46.573 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76563 00:17:46.573 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76563 00:17:46.833 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:46.833 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:46.833 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:46.833 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:17:46.833 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:46.833 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:17:46.833 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:17:46.833 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:46.833 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # 
nvmf_veth_fini 00:17:46.833 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:46.833 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:46.833 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:46.833 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:46.833 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:46.833 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:46.833 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:46.833 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:46.833 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:46.833 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:46.833 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:47.092 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:47.092 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:47.092 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:47.092 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.092 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:47.092 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.092 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:17:47.092 00:17:47.092 real 0m43.157s 00:17:47.092 user 2m20.744s 00:17:47.092 sys 0m12.663s 00:17:47.092 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:47.092 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:47.092 ************************************ 00:17:47.092 END TEST nvmf_host_multipath_status 00:17:47.092 ************************************ 00:17:47.092 13:04:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:47.092 13:04:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:47.092 13:04:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:47.092 13:04:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.092 ************************************ 00:17:47.092 START TEST nvmf_discovery_remove_ifc 00:17:47.092 ************************************ 00:17:47.092 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:47.092 * Looking for test storage... 00:17:47.092 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:47.092 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:47.092 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:17:47.093 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:47.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.353 --rc genhtml_branch_coverage=1 00:17:47.353 --rc genhtml_function_coverage=1 00:17:47.353 --rc genhtml_legend=1 00:17:47.353 --rc geninfo_all_blocks=1 00:17:47.353 --rc geninfo_unexecuted_blocks=1 00:17:47.353 00:17:47.353 ' 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:47.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.353 --rc genhtml_branch_coverage=1 00:17:47.353 --rc genhtml_function_coverage=1 00:17:47.353 --rc genhtml_legend=1 00:17:47.353 --rc geninfo_all_blocks=1 00:17:47.353 --rc geninfo_unexecuted_blocks=1 00:17:47.353 00:17:47.353 ' 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:47.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.353 --rc genhtml_branch_coverage=1 00:17:47.353 --rc genhtml_function_coverage=1 00:17:47.353 --rc genhtml_legend=1 00:17:47.353 --rc geninfo_all_blocks=1 00:17:47.353 --rc geninfo_unexecuted_blocks=1 00:17:47.353 00:17:47.353 ' 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:47.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.353 --rc genhtml_branch_coverage=1 00:17:47.353 --rc genhtml_function_coverage=1 00:17:47.353 --rc genhtml_legend=1 00:17:47.353 --rc geninfo_all_blocks=1 00:17:47.353 --rc geninfo_unexecuted_blocks=1 00:17:47.353 00:17:47.353 ' 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:47.353 13:04:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:47.353 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:47.354 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:47.354 13:04:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:47.354 Cannot find device "nvmf_init_br" 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:47.354 Cannot find device "nvmf_init_br2" 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:47.354 Cannot find device "nvmf_tgt_br" 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:47.354 Cannot find device "nvmf_tgt_br2" 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:47.354 Cannot find device "nvmf_init_br" 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:47.354 Cannot find device "nvmf_init_br2" 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:47.354 Cannot find device "nvmf_tgt_br" 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:47.354 Cannot find device "nvmf_tgt_br2" 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:47.354 Cannot find device "nvmf_br" 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:47.354 Cannot find device "nvmf_init_if" 00:17:47.354 13:04:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:47.354 Cannot find device "nvmf_init_if2" 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:47.354 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:47.354 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:47.354 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:47.614 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:47.614 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:47.614 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:47.614 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:47.614 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:47.614 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:47.614 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:47.614 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:47.614 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:47.614 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:47.614 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:47.614 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:47.614 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:47.614 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:47.614 13:04:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:47.614 13:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:47.614 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:47.614 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:47.614 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:47.614 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:47.614 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:47.614 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:47.614 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:47.614 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:47.614 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:47.614 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:47.614 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:47.614 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:47.614 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:47.614 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:47.614 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:17:47.614 00:17:47.614 --- 10.0.0.3 ping statistics --- 00:17:47.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.614 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:17:47.614 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:47.614 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:47.614 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:17:47.614 00:17:47.614 --- 10.0.0.4 ping statistics --- 00:17:47.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.614 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:17:47.614 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:47.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:47.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:17:47.615 00:17:47.615 --- 10.0.0.1 ping statistics --- 00:17:47.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.615 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:17:47.615 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:47.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:47.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:17:47.615 00:17:47.615 --- 10.0.0.2 ping statistics --- 00:17:47.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.615 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:17:47.615 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:47.615 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:17:47.615 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:47.615 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:47.615 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:47.615 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:47.615 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:47.615 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:47.615 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:47.873 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:17:47.873 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:47.873 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:47.873 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:47.873 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=77483 00:17:47.874 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:47.874 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 77483 00:17:47.874 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77483 ']' 00:17:47.874 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.874 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:47.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.874 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
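
The nvmf_veth_init trace above reduces to the topology sketched below: a network namespace for the target, veth pairs for the initiator and target sides, and a bridge joining them, with NVMe/TCP traffic allowed through on port 4420. Interface names and addresses are taken from the trace itself; this is a condensed, illustrative replay, not the test/nvmf/common.sh implementation (which also creates the nvmf_*_if2 / nvmf_*_br2 counterparts omitted here).

  ip netns add nvmf_tgt_ns_spdk                                 # target runs in its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target listen address
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up     # bridge ties the two veth peers together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP through
  ping -c 1 10.0.0.3                                            # sanity check: initiator reaches the target

The four pings that follow in the trace are exactly this reachability check, run in both directions and for both address pairs.
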
00:17:47.874 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:47.874 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:47.874 [2024-11-29 13:04:19.209020] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:17:47.874 [2024-11-29 13:04:19.209122] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.874 [2024-11-29 13:04:19.365334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.132 [2024-11-29 13:04:19.438683] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:48.132 [2024-11-29 13:04:19.438750] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:48.132 [2024-11-29 13:04:19.438776] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:48.132 [2024-11-29 13:04:19.438786] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:48.132 [2024-11-29 13:04:19.438795] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:48.132 [2024-11-29 13:04:19.439347] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.132 [2024-11-29 13:04:19.501029] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:48.132 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:48.132 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:17:48.132 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:48.132 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:48.132 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:48.132 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:48.132 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:17:48.132 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.132 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:48.132 [2024-11-29 13:04:19.627834] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:48.133 [2024-11-29 13:04:19.636043] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:17:48.391 null0 00:17:48.391 [2024-11-29 13:04:19.667948] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:48.391 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.391 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77513 00:17:48.391 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:17:48.391 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77513 /tmp/host.sock 00:17:48.391 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77513 ']' 00:17:48.391 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:17:48.391 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:48.391 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:48.391 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:48.391 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:48.391 13:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:48.391 [2024-11-29 13:04:19.751218] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:17:48.391 [2024-11-29 13:04:19.751363] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77513 ] 00:17:48.650 [2024-11-29 13:04:19.907620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.650 [2024-11-29 13:04:19.977014] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.650 13:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:48.650 13:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:17:48.650 13:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:48.650 13:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:17:48.650 13:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.650 13:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:48.650 13:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.650 13:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:17:48.650 13:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.650 13:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:48.650 [2024-11-29 13:04:20.129094] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:48.908 13:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.908 13:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:17:48.908 13:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.908 13:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:49.848 [2024-11-29 13:04:21.191376] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:49.848 [2024-11-29 13:04:21.191429] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:49.848 [2024-11-29 13:04:21.191504] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:49.848 [2024-11-29 13:04:21.197446] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:17:49.848 [2024-11-29 13:04:21.251969] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:17:49.848 [2024-11-29 13:04:21.253198] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1b03000:1 started. 00:17:49.848 [2024-11-29 13:04:21.255274] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:49.848 [2024-11-29 13:04:21.255331] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:49.848 [2024-11-29 13:04:21.255360] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:49.848 [2024-11-29 13:04:21.255377] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:17:49.848 [2024-11-29 13:04:21.255412] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:17:49.848 13:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.848 13:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:17:49.848 13:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:49.848 [2024-11-29 13:04:21.259796] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1b03000 was disconnected and freed. delete nvme_qpair. 
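
At this point the host application has attached controller nvme0 via discovery at 10.0.0.3:8009 and exposes namespace bdev nvme0n1; the rest of the trace removes the target-side address and interface and then polls until that bdev disappears. A condensed sketch of the flow being exercised, assuming rpc_cmd stands in for "scripts/rpc.py -s /tmp/host.sock" and get_bdev_list for the helper used by discovery_remove_ifc.sh:

  rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock "$@"; }

  # Attach through the discovery service; --wait-for-attach blocks until nvme0n1 exists.
  rpc_cmd bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach

  get_bdev_list() { rpc_cmd bdev_get_bdevs | jq -r '.[].name' | sort | xargs; }

  [[ $(get_bdev_list) == "nvme0n1" ]]                    # namespace bdev visible after attach

  # Pull the target path out from under the discovery controller.
  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

  # Poll until the bdev is gone; the ctrlr-loss / fast-io-fail timeouts above
  # bound how long the host keeps the controller (and its bdev) alive.
  while [[ $(get_bdev_list) != "" ]]; do
      sleep 1                                            # the repeated "sleep 1" entries below
  done

The loop of bdev_get_bdevs / sleep 1 iterations that follows in the trace is this polling phase; the ASYNC EVENT / "Connection timed out" messages further down are the controller finally noticing the dead path before the bdev is torn down.
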
00:17:49.848 13:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:49.848 13:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:49.848 13:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:49.848 13:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.848 13:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:49.848 13:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:49.848 13:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.848 13:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:17:49.848 13:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:17:49.848 13:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:17:49.848 13:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:17:49.848 13:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:49.848 13:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:49.848 13:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:49.848 13:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.848 13:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:49.848 13:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:49.848 13:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:50.107 13:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.107 13:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:50.107 13:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:51.045 13:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:51.045 13:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:51.045 13:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:51.045 13:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:51.045 13:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:51.045 13:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.045 13:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:51.045 13:04:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.045 13:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:51.045 13:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:51.982 13:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:51.982 13:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:51.982 13:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:51.982 13:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:51.982 13:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:51.982 13:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.982 13:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:51.982 13:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.240 13:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:52.240 13:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:53.175 13:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:53.175 13:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:53.175 13:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:53.175 13:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.175 13:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:53.175 13:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:53.175 13:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:53.175 13:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.175 13:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:53.175 13:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:54.108 13:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:54.108 13:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:54.108 13:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:54.108 13:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:54.108 13:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:54.108 13:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.108 13:04:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:54.108 13:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.367 13:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:54.367 13:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:55.355 13:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:55.355 13:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:55.355 13:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:55.355 13:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.355 13:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:55.355 13:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:55.355 13:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:55.355 13:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.355 [2024-11-29 13:04:26.682864] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:17:55.355 [2024-11-29 13:04:26.682942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:55.355 [2024-11-29 13:04:26.682976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.355 [2024-11-29 13:04:26.682990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:55.355 [2024-11-29 13:04:26.683001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.355 [2024-11-29 13:04:26.683021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:55.355 [2024-11-29 13:04:26.683041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.355 [2024-11-29 13:04:26.683052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:55.355 [2024-11-29 13:04:26.683061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.355 [2024-11-29 13:04:26.683071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:55.355 [2024-11-29 13:04:26.683080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.355 [2024-11-29 13:04:26.683089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf250 is same with the state(6) to be set 00:17:55.355 [2024-11-29 13:04:26.692859] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1adf250 (9): Bad file descriptor 00:17:55.355 13:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:55.355 13:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:55.355 [2024-11-29 13:04:26.702894] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:17:55.355 [2024-11-29 13:04:26.702935] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:17:55.355 [2024-11-29 13:04:26.702944] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:17:55.355 [2024-11-29 13:04:26.702950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:17:55.355 [2024-11-29 13:04:26.702989] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:17:56.346 13:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:56.346 13:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:56.346 13:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.346 13:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:56.346 13:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:56.346 13:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:56.346 13:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:56.346 [2024-11-29 13:04:27.740940] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:17:56.346 [2024-11-29 13:04:27.741034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1adf250 with addr=10.0.0.3, port=4420 00:17:56.346 [2024-11-29 13:04:27.741067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf250 is same with the state(6) to be set 00:17:56.346 [2024-11-29 13:04:27.741126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1adf250 (9): Bad file descriptor 00:17:56.346 [2024-11-29 13:04:27.741870] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:17:56.346 [2024-11-29 13:04:27.741953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:17:56.346 [2024-11-29 13:04:27.741976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:17:56.346 [2024-11-29 13:04:27.741995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:17:56.346 [2024-11-29 13:04:27.742021] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:17:56.346 [2024-11-29 13:04:27.742033] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:17:56.346 [2024-11-29 13:04:27.742041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:17:56.346 [2024-11-29 13:04:27.742058] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:17:56.346 [2024-11-29 13:04:27.742068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:17:56.346 13:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.346 13:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:56.346 13:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:57.280 [2024-11-29 13:04:28.742162] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:17:57.280 [2024-11-29 13:04:28.742233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:17:57.280 [2024-11-29 13:04:28.742269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:17:57.280 [2024-11-29 13:04:28.742281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:17:57.280 [2024-11-29 13:04:28.742292] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:17:57.280 [2024-11-29 13:04:28.742303] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:17:57.280 [2024-11-29 13:04:28.742310] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:17:57.280 [2024-11-29 13:04:28.742316] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:17:57.280 [2024-11-29 13:04:28.742364] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:17:57.280 [2024-11-29 13:04:28.742422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.280 [2024-11-29 13:04:28.742438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.280 [2024-11-29 13:04:28.742453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.280 [2024-11-29 13:04:28.742463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.280 [2024-11-29 13:04:28.742473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.280 [2024-11-29 13:04:28.742483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.280 [2024-11-29 13:04:28.742522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.280 [2024-11-29 13:04:28.742546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.280 [2024-11-29 13:04:28.742556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.280 [2024-11-29 13:04:28.742564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.280 [2024-11-29 13:04:28.742572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:17:57.280 [2024-11-29 13:04:28.742685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a6aa20 (9): Bad file descriptor 00:17:57.280 [2024-11-29 13:04:28.743699] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:17:57.280 [2024-11-29 13:04:28.743724] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:17:57.280 13:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:57.280 13:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:57.280 13:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.280 13:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:57.280 13:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:57.280 13:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:57.280 13:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:57.280 13:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.539 13:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:17:57.539 13:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:57.539 13:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:57.539 13:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:17:57.539 13:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:57.539 13:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:57.539 13:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:57.539 13:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:57.539 13:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:57.539 13:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.539 13:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:57.539 13:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.539 13:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:57.539 13:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:58.474 13:04:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:58.474 13:04:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:58.474 13:04:29 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:58.474 13:04:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:58.474 13:04:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.474 13:04:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:58.474 13:04:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:58.474 13:04:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.474 13:04:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:58.474 13:04:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:59.411 [2024-11-29 13:04:30.756658] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:59.411 [2024-11-29 13:04:30.756707] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:59.411 [2024-11-29 13:04:30.756728] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:59.411 [2024-11-29 13:04:30.762727] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:17:59.411 [2024-11-29 13:04:30.817271] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:17:59.411 [2024-11-29 13:04:30.818237] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1aead80:1 started. 00:17:59.411 [2024-11-29 13:04:30.819626] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:59.411 [2024-11-29 13:04:30.819671] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:59.411 [2024-11-29 13:04:30.819696] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:59.411 [2024-11-29 13:04:30.819713] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:17:59.411 [2024-11-29 13:04:30.819722] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:17:59.411 [2024-11-29 13:04:30.825460] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1aead80 was disconnected and freed. delete nvme_qpair. 
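The nvme1 attach above is the recovery half of the test: once the target interface got its 10.0.0.3/24 address back and was brought up inside the nvmf_tgt_ns_spdk namespace, discovery re-created the subsystem and the harness waits for the new bdev to appear. A hedged sketch of that wait, reusing the get_bdev_list helper sketched earlier and omitting whatever timeout handling the real wait_for_bdev has:

    # Poll once per second until the bdev list matches the expected value
    # ('' while waiting for nvme0n1 to disappear, nvme1n1 at this point).
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }
    wait_for_bdev nvme1n1

The traced [[ '' != \n\v\m\e\1\n\1 ]] checks below are exactly this loop deciding whether to sleep another second.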
00:17:59.670 13:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:59.671 13:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:59.671 13:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.671 13:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:59.671 13:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:59.671 13:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:59.671 13:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:59.671 13:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.671 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:17:59.671 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:17:59.671 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77513 00:17:59.671 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77513 ']' 00:17:59.671 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77513 00:17:59.671 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:17:59.671 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.671 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77513 00:17:59.671 killing process with pid 77513 00:17:59.671 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:59.671 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:59.671 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77513' 00:17:59.671 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77513 00:17:59.671 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77513 00:17:59.929 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:17:59.929 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:59.929 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:17:59.929 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:59.929 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:17:59.929 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:59.929 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:59.929 rmmod nvme_tcp 00:17:59.929 rmmod nvme_fabrics 00:17:59.929 rmmod nvme_keyring 00:17:59.929 13:04:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:59.929 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:17:59.929 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:17:59.929 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 77483 ']' 00:17:59.929 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 77483 00:17:59.929 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77483 ']' 00:17:59.930 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77483 00:17:59.930 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:17:59.930 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.930 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77483 00:17:59.930 killing process with pid 77483 00:17:59.930 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:59.930 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:59.930 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77483' 00:17:59.930 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77483 00:17:59.930 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77483 00:18:00.188 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:00.188 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:00.188 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:00.188 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:18:00.188 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:00.188 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:18:00.188 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:18:00.188 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:00.188 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:00.188 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:00.188 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:00.188 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:00.188 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:00.446 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:00.446 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:00.446 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:00.446 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:00.446 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:00.446 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:00.446 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:00.446 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:00.446 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:00.446 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:00.446 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.446 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.446 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.446 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:18:00.446 00:18:00.446 real 0m13.419s 00:18:00.446 user 0m22.684s 00:18:00.446 sys 0m2.488s 00:18:00.446 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:00.446 ************************************ 00:18:00.446 END TEST nvmf_discovery_remove_ifc 00:18:00.446 13:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:00.446 ************************************ 00:18:00.446 13:04:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:00.446 13:04:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:00.446 13:04:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:00.446 13:04:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.446 ************************************ 00:18:00.446 START TEST nvmf_identify_kernel_target 00:18:00.446 ************************************ 00:18:00.446 13:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:00.706 * Looking for test storage... 
00:18:00.706 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:00.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.706 --rc genhtml_branch_coverage=1 00:18:00.706 --rc genhtml_function_coverage=1 00:18:00.706 --rc genhtml_legend=1 00:18:00.706 --rc geninfo_all_blocks=1 00:18:00.706 --rc geninfo_unexecuted_blocks=1 00:18:00.706 00:18:00.706 ' 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:00.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.706 --rc genhtml_branch_coverage=1 00:18:00.706 --rc genhtml_function_coverage=1 00:18:00.706 --rc genhtml_legend=1 00:18:00.706 --rc geninfo_all_blocks=1 00:18:00.706 --rc geninfo_unexecuted_blocks=1 00:18:00.706 00:18:00.706 ' 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:00.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.706 --rc genhtml_branch_coverage=1 00:18:00.706 --rc genhtml_function_coverage=1 00:18:00.706 --rc genhtml_legend=1 00:18:00.706 --rc geninfo_all_blocks=1 00:18:00.706 --rc geninfo_unexecuted_blocks=1 00:18:00.706 00:18:00.706 ' 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:00.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.706 --rc genhtml_branch_coverage=1 00:18:00.706 --rc genhtml_function_coverage=1 00:18:00.706 --rc genhtml_legend=1 00:18:00.706 --rc geninfo_all_blocks=1 00:18:00.706 --rc geninfo_unexecuted_blocks=1 00:18:00.706 00:18:00.706 ' 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
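The lt 1.15 2 trace above is scripts/common.sh checking whether the installed lcov is older than version 2, so that the matching --rc style LCOV_OPTS can be exported. A rough, self-contained sketch of that field-by-field comparison (simplified; the real cmp_versions supports more operators and, as here, assumes numeric fields):

    # Split both versions on .-: and compare field by field; the first
    # differing field decides the result. Missing fields count as 0.
    version_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < n; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1
    }
    version_lt 1.15 2 && echo 'lcov older than 2: use the --rc lcov_branch_coverage=1 style options'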
00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:00.706 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:00.707 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:18:00.707 13:04:32 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:00.707 13:04:32 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:00.707 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:00.965 Cannot find device "nvmf_init_br" 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:00.965 Cannot find device "nvmf_init_br2" 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:00.965 Cannot find device "nvmf_tgt_br" 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:00.965 Cannot find device "nvmf_tgt_br2" 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:00.965 Cannot find device "nvmf_init_br" 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:00.965 Cannot find device "nvmf_init_br2" 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:00.965 Cannot find device "nvmf_tgt_br" 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:00.965 Cannot find device "nvmf_tgt_br2" 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:00.965 Cannot find device "nvmf_br" 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:00.965 Cannot find device "nvmf_init_if" 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:00.965 Cannot find device "nvmf_init_if2" 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:00.965 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:00.965 13:04:32 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:00.965 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:00.965 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:00.966 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:00.966 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:00.966 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:00.966 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:00.966 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:00.966 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:00.966 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:00.966 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:01.224 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:01.224 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:01.224 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:01.224 13:04:32 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:01.224 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:01.224 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:01.225 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:01.225 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:18:01.225 00:18:01.225 --- 10.0.0.3 ping statistics --- 00:18:01.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.225 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:01.225 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:01.225 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:18:01.225 00:18:01.225 --- 10.0.0.4 ping statistics --- 00:18:01.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.225 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:01.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:01.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:18:01.225 00:18:01.225 --- 10.0.0.1 ping statistics --- 00:18:01.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.225 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:01.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:01.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:18:01.225 00:18:01.225 --- 10.0.0.2 ping statistics --- 00:18:01.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.225 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:01.225 13:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:01.483 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:01.742 Waiting for block devices as requested 00:18:01.742 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:01.742 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:01.742 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:01.742 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:01.742 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:18:01.742 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:18:01.742 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:01.742 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:01.742 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:18:01.742 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:18:01.742 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:02.002 No valid GPT data, bailing 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:18:02.002 13:04:33 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:18:02.002 No valid GPT data, bailing 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:18:02.002 No valid GPT data, bailing 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:18:02.002 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:02.262 No valid GPT data, bailing 00:18:02.262 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:02.262 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:18:02.262 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:18:02.262 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:18:02.262 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:18:02.262 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:02.262 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:02.262 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:02.262 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:18:02.262 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:18:02.262 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:18:02.262 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:18:02.262 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:18:02.262 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:18:02.262 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:18:02.262 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:18:02.262 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:02.262 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -a 10.0.0.1 -t tcp -s 4420 00:18:02.262 00:18:02.262 Discovery Log Number of Records 2, Generation counter 2 00:18:02.262 =====Discovery Log Entry 0====== 00:18:02.262 trtype: tcp 00:18:02.262 adrfam: ipv4 00:18:02.262 subtype: current discovery subsystem 00:18:02.262 treq: not specified, sq flow control disable supported 00:18:02.262 portid: 1 00:18:02.262 trsvcid: 4420 00:18:02.262 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:02.262 traddr: 10.0.0.1 00:18:02.262 eflags: none 00:18:02.262 sectype: none 00:18:02.262 =====Discovery Log Entry 1====== 00:18:02.262 trtype: tcp 00:18:02.262 adrfam: ipv4 00:18:02.262 subtype: nvme subsystem 00:18:02.262 treq: not 
specified, sq flow control disable supported 00:18:02.262 portid: 1 00:18:02.262 trsvcid: 4420 00:18:02.262 subnqn: nqn.2016-06.io.spdk:testnqn 00:18:02.262 traddr: 10.0.0.1 00:18:02.262 eflags: none 00:18:02.262 sectype: none 00:18:02.262 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:18:02.262 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:18:02.522 ===================================================== 00:18:02.522 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:18:02.522 ===================================================== 00:18:02.522 Controller Capabilities/Features 00:18:02.522 ================================ 00:18:02.522 Vendor ID: 0000 00:18:02.522 Subsystem Vendor ID: 0000 00:18:02.522 Serial Number: 2f021b2a2424247d8d3f 00:18:02.522 Model Number: Linux 00:18:02.522 Firmware Version: 6.8.9-20 00:18:02.522 Recommended Arb Burst: 0 00:18:02.522 IEEE OUI Identifier: 00 00 00 00:18:02.522 Multi-path I/O 00:18:02.522 May have multiple subsystem ports: No 00:18:02.522 May have multiple controllers: No 00:18:02.522 Associated with SR-IOV VF: No 00:18:02.522 Max Data Transfer Size: Unlimited 00:18:02.522 Max Number of Namespaces: 0 00:18:02.522 Max Number of I/O Queues: 1024 00:18:02.522 NVMe Specification Version (VS): 1.3 00:18:02.522 NVMe Specification Version (Identify): 1.3 00:18:02.522 Maximum Queue Entries: 1024 00:18:02.522 Contiguous Queues Required: No 00:18:02.522 Arbitration Mechanisms Supported 00:18:02.522 Weighted Round Robin: Not Supported 00:18:02.522 Vendor Specific: Not Supported 00:18:02.522 Reset Timeout: 7500 ms 00:18:02.522 Doorbell Stride: 4 bytes 00:18:02.522 NVM Subsystem Reset: Not Supported 00:18:02.522 Command Sets Supported 00:18:02.522 NVM Command Set: Supported 00:18:02.522 Boot Partition: Not Supported 00:18:02.522 Memory Page Size Minimum: 4096 bytes 00:18:02.522 Memory Page Size Maximum: 4096 bytes 00:18:02.522 Persistent Memory Region: Not Supported 00:18:02.523 Optional Asynchronous Events Supported 00:18:02.523 Namespace Attribute Notices: Not Supported 00:18:02.523 Firmware Activation Notices: Not Supported 00:18:02.523 ANA Change Notices: Not Supported 00:18:02.523 PLE Aggregate Log Change Notices: Not Supported 00:18:02.523 LBA Status Info Alert Notices: Not Supported 00:18:02.523 EGE Aggregate Log Change Notices: Not Supported 00:18:02.523 Normal NVM Subsystem Shutdown event: Not Supported 00:18:02.523 Zone Descriptor Change Notices: Not Supported 00:18:02.523 Discovery Log Change Notices: Supported 00:18:02.523 Controller Attributes 00:18:02.523 128-bit Host Identifier: Not Supported 00:18:02.523 Non-Operational Permissive Mode: Not Supported 00:18:02.523 NVM Sets: Not Supported 00:18:02.523 Read Recovery Levels: Not Supported 00:18:02.523 Endurance Groups: Not Supported 00:18:02.523 Predictable Latency Mode: Not Supported 00:18:02.523 Traffic Based Keep ALive: Not Supported 00:18:02.523 Namespace Granularity: Not Supported 00:18:02.523 SQ Associations: Not Supported 00:18:02.523 UUID List: Not Supported 00:18:02.523 Multi-Domain Subsystem: Not Supported 00:18:02.523 Fixed Capacity Management: Not Supported 00:18:02.523 Variable Capacity Management: Not Supported 00:18:02.523 Delete Endurance Group: Not Supported 00:18:02.523 Delete NVM Set: Not Supported 00:18:02.523 Extended LBA Formats Supported: Not Supported 00:18:02.523 Flexible Data 
Placement Supported: Not Supported 00:18:02.523 00:18:02.523 Controller Memory Buffer Support 00:18:02.523 ================================ 00:18:02.523 Supported: No 00:18:02.523 00:18:02.523 Persistent Memory Region Support 00:18:02.523 ================================ 00:18:02.523 Supported: No 00:18:02.523 00:18:02.523 Admin Command Set Attributes 00:18:02.523 ============================ 00:18:02.523 Security Send/Receive: Not Supported 00:18:02.523 Format NVM: Not Supported 00:18:02.523 Firmware Activate/Download: Not Supported 00:18:02.523 Namespace Management: Not Supported 00:18:02.523 Device Self-Test: Not Supported 00:18:02.523 Directives: Not Supported 00:18:02.523 NVMe-MI: Not Supported 00:18:02.523 Virtualization Management: Not Supported 00:18:02.523 Doorbell Buffer Config: Not Supported 00:18:02.523 Get LBA Status Capability: Not Supported 00:18:02.523 Command & Feature Lockdown Capability: Not Supported 00:18:02.523 Abort Command Limit: 1 00:18:02.523 Async Event Request Limit: 1 00:18:02.523 Number of Firmware Slots: N/A 00:18:02.523 Firmware Slot 1 Read-Only: N/A 00:18:02.523 Firmware Activation Without Reset: N/A 00:18:02.523 Multiple Update Detection Support: N/A 00:18:02.523 Firmware Update Granularity: No Information Provided 00:18:02.523 Per-Namespace SMART Log: No 00:18:02.523 Asymmetric Namespace Access Log Page: Not Supported 00:18:02.523 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:18:02.523 Command Effects Log Page: Not Supported 00:18:02.523 Get Log Page Extended Data: Supported 00:18:02.523 Telemetry Log Pages: Not Supported 00:18:02.523 Persistent Event Log Pages: Not Supported 00:18:02.523 Supported Log Pages Log Page: May Support 00:18:02.523 Commands Supported & Effects Log Page: Not Supported 00:18:02.523 Feature Identifiers & Effects Log Page:May Support 00:18:02.523 NVMe-MI Commands & Effects Log Page: May Support 00:18:02.523 Data Area 4 for Telemetry Log: Not Supported 00:18:02.523 Error Log Page Entries Supported: 1 00:18:02.523 Keep Alive: Not Supported 00:18:02.523 00:18:02.523 NVM Command Set Attributes 00:18:02.523 ========================== 00:18:02.523 Submission Queue Entry Size 00:18:02.523 Max: 1 00:18:02.523 Min: 1 00:18:02.523 Completion Queue Entry Size 00:18:02.523 Max: 1 00:18:02.523 Min: 1 00:18:02.523 Number of Namespaces: 0 00:18:02.523 Compare Command: Not Supported 00:18:02.523 Write Uncorrectable Command: Not Supported 00:18:02.523 Dataset Management Command: Not Supported 00:18:02.523 Write Zeroes Command: Not Supported 00:18:02.523 Set Features Save Field: Not Supported 00:18:02.523 Reservations: Not Supported 00:18:02.523 Timestamp: Not Supported 00:18:02.523 Copy: Not Supported 00:18:02.523 Volatile Write Cache: Not Present 00:18:02.523 Atomic Write Unit (Normal): 1 00:18:02.523 Atomic Write Unit (PFail): 1 00:18:02.523 Atomic Compare & Write Unit: 1 00:18:02.523 Fused Compare & Write: Not Supported 00:18:02.523 Scatter-Gather List 00:18:02.523 SGL Command Set: Supported 00:18:02.523 SGL Keyed: Not Supported 00:18:02.523 SGL Bit Bucket Descriptor: Not Supported 00:18:02.523 SGL Metadata Pointer: Not Supported 00:18:02.523 Oversized SGL: Not Supported 00:18:02.523 SGL Metadata Address: Not Supported 00:18:02.523 SGL Offset: Supported 00:18:02.523 Transport SGL Data Block: Not Supported 00:18:02.523 Replay Protected Memory Block: Not Supported 00:18:02.523 00:18:02.523 Firmware Slot Information 00:18:02.523 ========================= 00:18:02.523 Active slot: 0 00:18:02.523 00:18:02.523 00:18:02.523 Error Log 
00:18:02.523 ========= 00:18:02.523 00:18:02.523 Active Namespaces 00:18:02.523 ================= 00:18:02.523 Discovery Log Page 00:18:02.523 ================== 00:18:02.523 Generation Counter: 2 00:18:02.523 Number of Records: 2 00:18:02.523 Record Format: 0 00:18:02.523 00:18:02.523 Discovery Log Entry 0 00:18:02.523 ---------------------- 00:18:02.523 Transport Type: 3 (TCP) 00:18:02.523 Address Family: 1 (IPv4) 00:18:02.523 Subsystem Type: 3 (Current Discovery Subsystem) 00:18:02.523 Entry Flags: 00:18:02.523 Duplicate Returned Information: 0 00:18:02.523 Explicit Persistent Connection Support for Discovery: 0 00:18:02.523 Transport Requirements: 00:18:02.523 Secure Channel: Not Specified 00:18:02.523 Port ID: 1 (0x0001) 00:18:02.523 Controller ID: 65535 (0xffff) 00:18:02.523 Admin Max SQ Size: 32 00:18:02.523 Transport Service Identifier: 4420 00:18:02.523 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:18:02.523 Transport Address: 10.0.0.1 00:18:02.523 Discovery Log Entry 1 00:18:02.523 ---------------------- 00:18:02.523 Transport Type: 3 (TCP) 00:18:02.523 Address Family: 1 (IPv4) 00:18:02.523 Subsystem Type: 2 (NVM Subsystem) 00:18:02.523 Entry Flags: 00:18:02.523 Duplicate Returned Information: 0 00:18:02.523 Explicit Persistent Connection Support for Discovery: 0 00:18:02.523 Transport Requirements: 00:18:02.523 Secure Channel: Not Specified 00:18:02.523 Port ID: 1 (0x0001) 00:18:02.523 Controller ID: 65535 (0xffff) 00:18:02.523 Admin Max SQ Size: 32 00:18:02.523 Transport Service Identifier: 4420 00:18:02.524 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:18:02.524 Transport Address: 10.0.0.1 00:18:02.524 13:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:18:02.524 get_feature(0x01) failed 00:18:02.524 get_feature(0x02) failed 00:18:02.524 get_feature(0x04) failed 00:18:02.524 ===================================================== 00:18:02.524 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:18:02.524 ===================================================== 00:18:02.524 Controller Capabilities/Features 00:18:02.524 ================================ 00:18:02.524 Vendor ID: 0000 00:18:02.524 Subsystem Vendor ID: 0000 00:18:02.524 Serial Number: ef448ef7a17881d825e1 00:18:02.524 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:18:02.524 Firmware Version: 6.8.9-20 00:18:02.524 Recommended Arb Burst: 6 00:18:02.524 IEEE OUI Identifier: 00 00 00 00:18:02.524 Multi-path I/O 00:18:02.524 May have multiple subsystem ports: Yes 00:18:02.524 May have multiple controllers: Yes 00:18:02.524 Associated with SR-IOV VF: No 00:18:02.524 Max Data Transfer Size: Unlimited 00:18:02.524 Max Number of Namespaces: 1024 00:18:02.524 Max Number of I/O Queues: 128 00:18:02.524 NVMe Specification Version (VS): 1.3 00:18:02.524 NVMe Specification Version (Identify): 1.3 00:18:02.524 Maximum Queue Entries: 1024 00:18:02.524 Contiguous Queues Required: No 00:18:02.524 Arbitration Mechanisms Supported 00:18:02.524 Weighted Round Robin: Not Supported 00:18:02.524 Vendor Specific: Not Supported 00:18:02.524 Reset Timeout: 7500 ms 00:18:02.524 Doorbell Stride: 4 bytes 00:18:02.524 NVM Subsystem Reset: Not Supported 00:18:02.524 Command Sets Supported 00:18:02.524 NVM Command Set: Supported 00:18:02.524 Boot Partition: Not Supported 00:18:02.524 Memory 
Page Size Minimum: 4096 bytes 00:18:02.524 Memory Page Size Maximum: 4096 bytes 00:18:02.524 Persistent Memory Region: Not Supported 00:18:02.524 Optional Asynchronous Events Supported 00:18:02.524 Namespace Attribute Notices: Supported 00:18:02.524 Firmware Activation Notices: Not Supported 00:18:02.524 ANA Change Notices: Supported 00:18:02.524 PLE Aggregate Log Change Notices: Not Supported 00:18:02.524 LBA Status Info Alert Notices: Not Supported 00:18:02.524 EGE Aggregate Log Change Notices: Not Supported 00:18:02.524 Normal NVM Subsystem Shutdown event: Not Supported 00:18:02.524 Zone Descriptor Change Notices: Not Supported 00:18:02.524 Discovery Log Change Notices: Not Supported 00:18:02.524 Controller Attributes 00:18:02.524 128-bit Host Identifier: Supported 00:18:02.524 Non-Operational Permissive Mode: Not Supported 00:18:02.524 NVM Sets: Not Supported 00:18:02.524 Read Recovery Levels: Not Supported 00:18:02.524 Endurance Groups: Not Supported 00:18:02.524 Predictable Latency Mode: Not Supported 00:18:02.524 Traffic Based Keep ALive: Supported 00:18:02.524 Namespace Granularity: Not Supported 00:18:02.524 SQ Associations: Not Supported 00:18:02.524 UUID List: Not Supported 00:18:02.524 Multi-Domain Subsystem: Not Supported 00:18:02.524 Fixed Capacity Management: Not Supported 00:18:02.524 Variable Capacity Management: Not Supported 00:18:02.524 Delete Endurance Group: Not Supported 00:18:02.524 Delete NVM Set: Not Supported 00:18:02.524 Extended LBA Formats Supported: Not Supported 00:18:02.524 Flexible Data Placement Supported: Not Supported 00:18:02.524 00:18:02.524 Controller Memory Buffer Support 00:18:02.524 ================================ 00:18:02.524 Supported: No 00:18:02.524 00:18:02.524 Persistent Memory Region Support 00:18:02.524 ================================ 00:18:02.524 Supported: No 00:18:02.524 00:18:02.524 Admin Command Set Attributes 00:18:02.524 ============================ 00:18:02.524 Security Send/Receive: Not Supported 00:18:02.524 Format NVM: Not Supported 00:18:02.524 Firmware Activate/Download: Not Supported 00:18:02.524 Namespace Management: Not Supported 00:18:02.524 Device Self-Test: Not Supported 00:18:02.524 Directives: Not Supported 00:18:02.524 NVMe-MI: Not Supported 00:18:02.524 Virtualization Management: Not Supported 00:18:02.524 Doorbell Buffer Config: Not Supported 00:18:02.524 Get LBA Status Capability: Not Supported 00:18:02.524 Command & Feature Lockdown Capability: Not Supported 00:18:02.524 Abort Command Limit: 4 00:18:02.524 Async Event Request Limit: 4 00:18:02.524 Number of Firmware Slots: N/A 00:18:02.524 Firmware Slot 1 Read-Only: N/A 00:18:02.524 Firmware Activation Without Reset: N/A 00:18:02.524 Multiple Update Detection Support: N/A 00:18:02.524 Firmware Update Granularity: No Information Provided 00:18:02.524 Per-Namespace SMART Log: Yes 00:18:02.524 Asymmetric Namespace Access Log Page: Supported 00:18:02.524 ANA Transition Time : 10 sec 00:18:02.524 00:18:02.524 Asymmetric Namespace Access Capabilities 00:18:02.524 ANA Optimized State : Supported 00:18:02.524 ANA Non-Optimized State : Supported 00:18:02.524 ANA Inaccessible State : Supported 00:18:02.524 ANA Persistent Loss State : Supported 00:18:02.524 ANA Change State : Supported 00:18:02.524 ANAGRPID is not changed : No 00:18:02.524 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:18:02.524 00:18:02.524 ANA Group Identifier Maximum : 128 00:18:02.524 Number of ANA Group Identifiers : 128 00:18:02.524 Max Number of Allowed Namespaces : 1024 00:18:02.524 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:18:02.524 Command Effects Log Page: Supported 00:18:02.524 Get Log Page Extended Data: Supported 00:18:02.524 Telemetry Log Pages: Not Supported 00:18:02.524 Persistent Event Log Pages: Not Supported 00:18:02.524 Supported Log Pages Log Page: May Support 00:18:02.524 Commands Supported & Effects Log Page: Not Supported 00:18:02.524 Feature Identifiers & Effects Log Page:May Support 00:18:02.524 NVMe-MI Commands & Effects Log Page: May Support 00:18:02.524 Data Area 4 for Telemetry Log: Not Supported 00:18:02.524 Error Log Page Entries Supported: 128 00:18:02.524 Keep Alive: Supported 00:18:02.525 Keep Alive Granularity: 1000 ms 00:18:02.525 00:18:02.525 NVM Command Set Attributes 00:18:02.525 ========================== 00:18:02.525 Submission Queue Entry Size 00:18:02.525 Max: 64 00:18:02.525 Min: 64 00:18:02.525 Completion Queue Entry Size 00:18:02.525 Max: 16 00:18:02.525 Min: 16 00:18:02.525 Number of Namespaces: 1024 00:18:02.525 Compare Command: Not Supported 00:18:02.525 Write Uncorrectable Command: Not Supported 00:18:02.525 Dataset Management Command: Supported 00:18:02.525 Write Zeroes Command: Supported 00:18:02.525 Set Features Save Field: Not Supported 00:18:02.525 Reservations: Not Supported 00:18:02.525 Timestamp: Not Supported 00:18:02.525 Copy: Not Supported 00:18:02.525 Volatile Write Cache: Present 00:18:02.525 Atomic Write Unit (Normal): 1 00:18:02.525 Atomic Write Unit (PFail): 1 00:18:02.525 Atomic Compare & Write Unit: 1 00:18:02.525 Fused Compare & Write: Not Supported 00:18:02.525 Scatter-Gather List 00:18:02.525 SGL Command Set: Supported 00:18:02.525 SGL Keyed: Not Supported 00:18:02.525 SGL Bit Bucket Descriptor: Not Supported 00:18:02.525 SGL Metadata Pointer: Not Supported 00:18:02.525 Oversized SGL: Not Supported 00:18:02.525 SGL Metadata Address: Not Supported 00:18:02.525 SGL Offset: Supported 00:18:02.525 Transport SGL Data Block: Not Supported 00:18:02.525 Replay Protected Memory Block: Not Supported 00:18:02.525 00:18:02.525 Firmware Slot Information 00:18:02.525 ========================= 00:18:02.525 Active slot: 0 00:18:02.525 00:18:02.525 Asymmetric Namespace Access 00:18:02.525 =========================== 00:18:02.525 Change Count : 0 00:18:02.525 Number of ANA Group Descriptors : 1 00:18:02.525 ANA Group Descriptor : 0 00:18:02.525 ANA Group ID : 1 00:18:02.525 Number of NSID Values : 1 00:18:02.525 Change Count : 0 00:18:02.525 ANA State : 1 00:18:02.525 Namespace Identifier : 1 00:18:02.525 00:18:02.525 Commands Supported and Effects 00:18:02.525 ============================== 00:18:02.525 Admin Commands 00:18:02.525 -------------- 00:18:02.525 Get Log Page (02h): Supported 00:18:02.525 Identify (06h): Supported 00:18:02.525 Abort (08h): Supported 00:18:02.525 Set Features (09h): Supported 00:18:02.525 Get Features (0Ah): Supported 00:18:02.525 Asynchronous Event Request (0Ch): Supported 00:18:02.525 Keep Alive (18h): Supported 00:18:02.525 I/O Commands 00:18:02.525 ------------ 00:18:02.525 Flush (00h): Supported 00:18:02.525 Write (01h): Supported LBA-Change 00:18:02.525 Read (02h): Supported 00:18:02.525 Write Zeroes (08h): Supported LBA-Change 00:18:02.525 Dataset Management (09h): Supported 00:18:02.525 00:18:02.525 Error Log 00:18:02.525 ========= 00:18:02.525 Entry: 0 00:18:02.525 Error Count: 0x3 00:18:02.525 Submission Queue Id: 0x0 00:18:02.525 Command Id: 0x5 00:18:02.525 Phase Bit: 0 00:18:02.525 Status Code: 0x2 00:18:02.525 Status Code Type: 0x0 00:18:02.525 Do Not Retry: 1 00:18:02.525 Error 
Location: 0x28 00:18:02.525 LBA: 0x0 00:18:02.525 Namespace: 0x0 00:18:02.525 Vendor Log Page: 0x0 00:18:02.525 ----------- 00:18:02.525 Entry: 1 00:18:02.525 Error Count: 0x2 00:18:02.525 Submission Queue Id: 0x0 00:18:02.525 Command Id: 0x5 00:18:02.525 Phase Bit: 0 00:18:02.525 Status Code: 0x2 00:18:02.525 Status Code Type: 0x0 00:18:02.525 Do Not Retry: 1 00:18:02.525 Error Location: 0x28 00:18:02.525 LBA: 0x0 00:18:02.525 Namespace: 0x0 00:18:02.525 Vendor Log Page: 0x0 00:18:02.525 ----------- 00:18:02.525 Entry: 2 00:18:02.525 Error Count: 0x1 00:18:02.525 Submission Queue Id: 0x0 00:18:02.525 Command Id: 0x4 00:18:02.525 Phase Bit: 0 00:18:02.525 Status Code: 0x2 00:18:02.525 Status Code Type: 0x0 00:18:02.525 Do Not Retry: 1 00:18:02.525 Error Location: 0x28 00:18:02.525 LBA: 0x0 00:18:02.525 Namespace: 0x0 00:18:02.525 Vendor Log Page: 0x0 00:18:02.525 00:18:02.525 Number of Queues 00:18:02.525 ================ 00:18:02.525 Number of I/O Submission Queues: 128 00:18:02.525 Number of I/O Completion Queues: 128 00:18:02.525 00:18:02.525 ZNS Specific Controller Data 00:18:02.525 ============================ 00:18:02.525 Zone Append Size Limit: 0 00:18:02.525 00:18:02.525 00:18:02.525 Active Namespaces 00:18:02.525 ================= 00:18:02.525 get_feature(0x05) failed 00:18:02.525 Namespace ID:1 00:18:02.525 Command Set Identifier: NVM (00h) 00:18:02.525 Deallocate: Supported 00:18:02.525 Deallocated/Unwritten Error: Not Supported 00:18:02.525 Deallocated Read Value: Unknown 00:18:02.525 Deallocate in Write Zeroes: Not Supported 00:18:02.525 Deallocated Guard Field: 0xFFFF 00:18:02.525 Flush: Supported 00:18:02.525 Reservation: Not Supported 00:18:02.525 Namespace Sharing Capabilities: Multiple Controllers 00:18:02.525 Size (in LBAs): 1310720 (5GiB) 00:18:02.525 Capacity (in LBAs): 1310720 (5GiB) 00:18:02.525 Utilization (in LBAs): 1310720 (5GiB) 00:18:02.525 UUID: 285408aa-dc9b-4294-9483-56db0d5c1247 00:18:02.525 Thin Provisioning: Not Supported 00:18:02.525 Per-NS Atomic Units: Yes 00:18:02.525 Atomic Boundary Size (Normal): 0 00:18:02.525 Atomic Boundary Size (PFail): 0 00:18:02.525 Atomic Boundary Offset: 0 00:18:02.525 NGUID/EUI64 Never Reused: No 00:18:02.525 ANA group ID: 1 00:18:02.525 Namespace Write Protected: No 00:18:02.525 Number of LBA Formats: 1 00:18:02.525 Current LBA Format: LBA Format #00 00:18:02.525 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:18:02.525 00:18:02.525 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:18:02.525 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:02.525 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:18:02.785 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:02.785 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:18:02.785 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:02.785 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:02.785 rmmod nvme_tcp 00:18:02.785 rmmod nvme_fabrics 00:18:02.785 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:02.785 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:18:02.785 13:04:34 
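nvmfcleanup wraps the module unloads in a set +e / retry loop (common.sh@124-129 in the trace) because nvme-tcp can still hold references while connections drain; on this run the first pass succeeds and the function returns immediately. A plausible reading of that control flow, with the retry details hedged since only the successful first iteration is visible in the trace:

set +e                                   # tolerate transient 'module in use' failures
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1                              # assumed back-off between attempts; not visible in this trace
done
set -e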
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:18:02.785 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:18:02.785 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:02.785 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:02.786 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:02.786 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:18:02.786 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:18:02.786 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:02.786 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:02.786 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:02.786 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:02.786 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:02.786 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:02.786 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:02.786 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:02.786 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:02.786 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:02.786 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:02.786 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:02.786 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:02.786 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:02.786 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:02.786 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:03.045 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:03.045 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:03.045 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.045 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:03.045 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.045 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:18:03.045 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:18:03.045 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:18:03.045 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:18:03.045 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:03.045 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:03.045 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:03.045 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:03.045 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:18:03.045 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:18:03.045 13:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:03.640 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:03.898 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:03.898 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:03.898 ************************************ 00:18:03.898 END TEST nvmf_identify_kernel_target 00:18:03.898 ************************************ 00:18:03.898 00:18:03.898 real 0m3.368s 00:18:03.898 user 0m1.244s 00:18:03.898 sys 0m1.449s 00:18:03.898 13:04:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:03.898 13:04:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.898 13:04:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:18:03.898 13:04:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:03.898 13:04:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:03.898 13:04:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.898 ************************************ 00:18:03.898 START TEST nvmf_auth_host 00:18:03.898 ************************************ 00:18:03.898 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:18:04.157 * Looking for test storage... 
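The clean_kernel_target trace above (common.sh@712-723) dismantles the configfs state in reverse creation order before nvmf_auth_host starts: disable and remove the namespace, unlink the subsystem from the port, remove the port and subsystem directories, then unload the kernel target modules. A hedged sketch of that teardown (the redirect target of the `echo 0` is not shown in the trace and is assumed to be the namespace enable attribute):

nqn=nqn.2016-06.io.spdk:testnqn
echo 0 > /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1/enable   # assumed: disable the namespace first
rm -f  /sys/kernel/config/nvmet/ports/1/subsystems/$nqn                 # unlink the subsystem from the port
rmdir  /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1
rmdir  /sys/kernel/config/nvmet/ports/1
rmdir  /sys/kernel/config/nvmet/subsystems/$nqn
modprobe -r nvmet_tcp nvmet                                             # drop the kernel target modules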
00:18:04.157 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:04.157 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:04.157 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:18:04.157 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:04.157 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:04.157 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:04.157 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:04.157 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:04.157 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:04.157 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:04.157 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:04.157 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:04.157 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:04.157 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:04.157 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:04.157 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:04.157 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:18:04.157 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:18:04.157 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:04.157 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:04.157 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:18:04.157 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:18:04.157 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:04.157 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:18:04.157 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:04.157 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:18:04.157 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:18:04.157 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:04.157 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:18:04.157 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:04.157 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:04.157 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:04.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.158 --rc genhtml_branch_coverage=1 00:18:04.158 --rc genhtml_function_coverage=1 00:18:04.158 --rc genhtml_legend=1 00:18:04.158 --rc geninfo_all_blocks=1 00:18:04.158 --rc geninfo_unexecuted_blocks=1 00:18:04.158 00:18:04.158 ' 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:04.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.158 --rc genhtml_branch_coverage=1 00:18:04.158 --rc genhtml_function_coverage=1 00:18:04.158 --rc genhtml_legend=1 00:18:04.158 --rc geninfo_all_blocks=1 00:18:04.158 --rc geninfo_unexecuted_blocks=1 00:18:04.158 00:18:04.158 ' 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:04.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.158 --rc genhtml_branch_coverage=1 00:18:04.158 --rc genhtml_function_coverage=1 00:18:04.158 --rc genhtml_legend=1 00:18:04.158 --rc geninfo_all_blocks=1 00:18:04.158 --rc geninfo_unexecuted_blocks=1 00:18:04.158 00:18:04.158 ' 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:04.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.158 --rc genhtml_branch_coverage=1 00:18:04.158 --rc genhtml_function_coverage=1 00:18:04.158 --rc genhtml_legend=1 00:18:04.158 --rc geninfo_all_blocks=1 00:18:04.158 --rc geninfo_unexecuted_blocks=1 00:18:04.158 00:18:04.158 ' 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:04.158 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.158 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:04.159 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.159 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:04.159 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:04.159 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:04.159 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:04.159 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:04.159 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:04.159 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:04.159 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:04.159 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:04.159 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:04.159 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:04.159 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:04.159 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:04.159 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:04.159 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:04.159 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:04.159 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:04.159 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:04.159 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:04.159 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:04.159 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:04.159 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:04.159 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:04.159 Cannot find device "nvmf_init_br" 00:18:04.159 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:18:04.159 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:04.159 Cannot find device "nvmf_init_br2" 00:18:04.159 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:18:04.159 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:04.159 Cannot find device "nvmf_tgt_br" 00:18:04.159 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:18:04.159 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:04.159 Cannot find device "nvmf_tgt_br2" 00:18:04.159 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:18:04.159 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:04.418 Cannot find device "nvmf_init_br" 00:18:04.418 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:18:04.418 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:04.418 Cannot find device "nvmf_init_br2" 00:18:04.418 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:18:04.418 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:04.418 Cannot find device "nvmf_tgt_br" 00:18:04.418 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:18:04.418 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:04.418 Cannot find device "nvmf_tgt_br2" 00:18:04.418 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:18:04.418 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:04.418 Cannot find device "nvmf_br" 00:18:04.418 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:18:04.418 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:04.418 Cannot find device "nvmf_init_if" 00:18:04.418 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:18:04.418 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:04.418 Cannot find device "nvmf_init_if2" 00:18:04.418 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:18:04.418 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:04.418 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:04.418 13:04:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:18:04.418 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:04.418 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:04.418 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:18:04.418 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:04.418 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:04.419 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:04.419 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:04.419 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:04.419 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:04.419 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:04.419 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:04.419 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:04.419 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:04.419 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:04.419 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:04.419 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:04.419 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:04.419 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:04.419 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:04.419 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:04.419 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:04.419 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:04.419 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:04.419 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:04.419 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:04.678 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:04.678 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:04.678 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
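[editor's note] The commands above are nvmf_veth_init assembling the test topology: the SPDK target lives in its own network namespace and is wired back to the initiator side through veth pairs and a bridge (the remaining bridge enslavement and firewall rules follow in the next records). A minimal stand-alone sketch of the same layout, using only the interface names and addresses that appear in this run and omitting the second initiator/target pair:

# Sketch: rebuild the core of the test topology by hand (names/addresses as in this log).
ip netns add nvmf_tgt_ns_spdk                                    # target-side namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                         # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge                                   # bridge joining the host-side veth ends
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ping -c 1 10.0.0.3                                                # initiator -> target reachability check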
00:18:04.678 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:04.678 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:04.678 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:04.678 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:04.678 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:04.678 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:04.678 13:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:04.678 13:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:04.678 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:04.678 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:18:04.678 00:18:04.678 --- 10.0.0.3 ping statistics --- 00:18:04.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.678 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:18:04.678 13:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:04.678 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:04.678 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:18:04.678 00:18:04.678 --- 10.0.0.4 ping statistics --- 00:18:04.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.678 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:18:04.678 13:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:04.678 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:04.678 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:18:04.678 00:18:04.678 --- 10.0.0.1 ping statistics --- 00:18:04.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.678 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:18:04.678 13:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:04.678 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:04.678 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:18:04.678 00:18:04.678 --- 10.0.0.2 ping statistics --- 00:18:04.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.678 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:18:04.678 13:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:04.678 13:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:18:04.678 13:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:04.678 13:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:04.678 13:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:04.678 13:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:04.678 13:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:04.678 13:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:04.678 13:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:04.678 13:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:18:04.678 13:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:04.678 13:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:04.678 13:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.678 13:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=78505 00:18:04.678 13:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 78505 00:18:04.678 13:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:18:04.678 13:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78505 ']' 00:18:04.678 13:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.678 13:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:04.678 13:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
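[editor's note] With the pings confirming connectivity and nvme-tcp loaded, nvmfappstart launches the SPDK target inside that namespace with the nvme_auth debug log flag and then waits for the RPC socket before issuing commands. A rough equivalent of what the harness does here; waitforlisten is an SPDK helper, approximated below with a simple poll loop against rpc.py:

# Start the SPDK NVMe-oF target in the test namespace (arguments as in this run).
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!

# Approximation of waitforlisten: block until the RPC socket answers.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"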
00:18:04.678 13:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:04.678 13:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.057 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.057 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:18:06.057 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:06.057 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:06.057 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.057 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.057 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:18:06.057 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:18:06.057 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:06.057 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:06.057 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:06.057 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:18:06.057 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:18:06.057 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:06.057 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e7fd20a565a9db72e1852825eaba784f 00:18:06.057 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.AXY 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e7fd20a565a9db72e1852825eaba784f 0 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e7fd20a565a9db72e1852825eaba784f 0 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e7fd20a565a9db72e1852825eaba784f 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.AXY 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.AXY 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.AXY 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:06.058 13:04:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fb43030a3beab13db13ee6870b8c311196aa3df681da3fa195d2410e85d98f99 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.zyC 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fb43030a3beab13db13ee6870b8c311196aa3df681da3fa195d2410e85d98f99 3 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fb43030a3beab13db13ee6870b8c311196aa3df681da3fa195d2410e85d98f99 3 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fb43030a3beab13db13ee6870b8c311196aa3df681da3fa195d2410e85d98f99 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.zyC 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.zyC 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.zyC 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=72778dea601a483195ea8a629656fa836fe72644ffee0471 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Ugl 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 72778dea601a483195ea8a629656fa836fe72644ffee0471 0 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 72778dea601a483195ea8a629656fa836fe72644ffee0471 0 
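[editor's note] gen_dhchap_key repeats the same recipe for every entry in keys[]/ckeys[]: pull the requested number of random bytes with xxd, wrap them in the DH-HMAC-CHAP secret representation via an inline python snippet, and store the result in a mode-0600 temp file. The exact python body is not shown in the log, so the sketch below is a hedged reconstruction assuming the standard DHHC-1 secret format (base64 of the secret with a little-endian CRC-32 appended); the digest codes null=0, sha256=1, sha384=2, sha512=3 match the table in the log:

# Sketch: produce a DHHC-1 secret like the ones above (CRC-32 wrapping is an assumption).
key=$(xxd -p -c0 -l 24 /dev/urandom)       # 48 hex chars -> 24 raw bytes
digest=0                                   # 0=null, 1=sha256, 2=sha384, 3=sha512
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" "$digest" <<'EOF' > "$file"
import base64, binascii, struct, sys
raw = bytes.fromhex(sys.argv[1])
# Assumed representation: DHHC-1:<digest>:<base64(secret || CRC-32 of secret)>:
crc = struct.pack("<I", binascii.crc32(raw) & 0xffffffff)
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(raw + crc).decode()}:")
EOF
chmod 0600 "$file"
cat "$file"                                # e.g. DHHC-1:00:NzI3...==: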
00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=72778dea601a483195ea8a629656fa836fe72644ffee0471 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Ugl 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Ugl 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Ugl 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5d3d7acf8f33ab494d4d41256584b566ceca70edd8fa9407 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.dnj 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5d3d7acf8f33ab494d4d41256584b566ceca70edd8fa9407 2 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5d3d7acf8f33ab494d4d41256584b566ceca70edd8fa9407 2 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5d3d7acf8f33ab494d4d41256584b566ceca70edd8fa9407 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.dnj 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.dnj 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.dnj 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:06.058 13:04:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4cabc131db6a3c287be4e67159d05b5c 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ZYp 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4cabc131db6a3c287be4e67159d05b5c 1 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4cabc131db6a3c287be4e67159d05b5c 1 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4cabc131db6a3c287be4e67159d05b5c 00:18:06.058 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:18:06.059 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:06.317 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ZYp 00:18:06.317 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ZYp 00:18:06.317 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.ZYp 00:18:06.317 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:18:06.317 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:06.317 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:06.317 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:06.317 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:18:06.317 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:18:06.317 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:06.317 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=22889280693dcfc7d8e86ea127908135 00:18:06.317 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:06.317 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ZpG 00:18:06.317 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 22889280693dcfc7d8e86ea127908135 1 00:18:06.317 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 22889280693dcfc7d8e86ea127908135 1 00:18:06.317 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:06.317 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:06.317 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=22889280693dcfc7d8e86ea127908135 00:18:06.317 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:18:06.317 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:06.317 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ZpG 00:18:06.317 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ZpG 00:18:06.317 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.ZpG 00:18:06.317 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:18:06.317 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:06.317 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:06.317 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:06.317 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=02fc7fca5eed9f140dc8795badb6b10859f0fa2b36058473 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.8XI 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 02fc7fca5eed9f140dc8795badb6b10859f0fa2b36058473 2 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 02fc7fca5eed9f140dc8795badb6b10859f0fa2b36058473 2 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=02fc7fca5eed9f140dc8795badb6b10859f0fa2b36058473 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.8XI 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.8XI 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.8XI 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:18:06.318 13:04:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0680526c9cba5fcdf1936306786ae35b 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.XJA 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0680526c9cba5fcdf1936306786ae35b 0 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0680526c9cba5fcdf1936306786ae35b 0 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0680526c9cba5fcdf1936306786ae35b 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.XJA 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.XJA 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.XJA 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=296c5a88be145ba3840a2b7c8ad91f3d162cc5eb14d50cec7b4319cfacb2b7b7 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.cQk 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 296c5a88be145ba3840a2b7c8ad91f3d162cc5eb14d50cec7b4319cfacb2b7b7 3 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 296c5a88be145ba3840a2b7c8ad91f3d162cc5eb14d50cec7b4319cfacb2b7b7 3 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=296c5a88be145ba3840a2b7c8ad91f3d162cc5eb14d50cec7b4319cfacb2b7b7 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:18:06.318 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:18:06.576 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.cQk 00:18:06.576 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.cQk 00:18:06.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.576 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.cQk 00:18:06.576 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:18:06.576 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78505 00:18:06.576 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78505 ']' 00:18:06.576 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.576 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:06.576 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.576 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:06.576 13:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.AXY 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.zyC ]] 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zyC 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Ugl 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.dnj ]] 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.dnj 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.ZYp 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.ZpG ]] 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZpG 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.8XI 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.835 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.836 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.836 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.XJA ]] 00:18:06.836 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.XJA 00:18:06.836 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.836 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.836 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.836 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:06.836 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.cQk 00:18:06.836 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.836 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.836 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.836 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:18:06.836 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:18:06.836 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:18:06.836 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:06.836 13:04:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:06.836 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:06.836 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:06.836 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:06.836 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:06.836 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:06.836 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:06.836 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:06.836 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:06.836 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:18:06.836 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:18:06.836 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:18:06.836 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:06.836 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:06.836 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:06.836 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:18:06.836 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:18:06.836 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:18:06.836 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:06.836 13:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:07.402 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:07.402 Waiting for block devices as requested 00:18:07.402 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:07.402 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:07.969 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:07.969 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:07.969 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:18:07.969 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:18:07.969 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:07.969 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:07.969 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:18:07.969 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:18:07.969 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:07.969 No valid GPT data, bailing 00:18:07.969 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:07.969 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:18:07.969 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:18:07.969 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:18:07.969 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:07.969 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:18:07.969 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:18:07.969 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:18:07.969 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:07.969 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:07.969 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:18:07.969 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:18:07.969 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:18:08.229 No valid GPT data, bailing 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:18:08.229 No valid GPT data, bailing 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:08.229 No valid GPT data, bailing 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:08.229 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 --hostid=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 -a 10.0.0.1 -t tcp -s 4420 00:18:08.487 00:18:08.487 Discovery Log Number of Records 2, Generation counter 2 00:18:08.487 =====Discovery Log Entry 0====== 00:18:08.487 trtype: tcp 00:18:08.487 adrfam: ipv4 00:18:08.487 subtype: current discovery subsystem 00:18:08.487 treq: not specified, sq flow control disable supported 00:18:08.487 portid: 1 00:18:08.487 trsvcid: 4420 00:18:08.487 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:08.487 traddr: 10.0.0.1 00:18:08.487 eflags: none 00:18:08.487 sectype: none 00:18:08.487 =====Discovery Log Entry 1====== 00:18:08.487 trtype: tcp 00:18:08.487 adrfam: ipv4 00:18:08.487 subtype: nvme subsystem 00:18:08.487 treq: not specified, sq flow control disable supported 00:18:08.487 portid: 1 00:18:08.487 trsvcid: 4420 00:18:08.487 subnqn: nqn.2024-02.io.spdk:cnode0 00:18:08.487 traddr: 10.0.0.1 00:18:08.487 eflags: none 00:18:08.487 sectype: none 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: ]] 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.488 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.488 nvme0n1 00:18:08.747 13:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmZDIwYTU2NWE5ZGI3MmUxODUyODI1ZWFiYTc4NGYHnesH: 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmZDIwYTU2NWE5ZGI3MmUxODUyODI1ZWFiYTc4NGYHnesH: 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: ]] 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.747 nvme0n1 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.747 
13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.747 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.006 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.006 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:09.006 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:09.006 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: ]] 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:09.007 13:04:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.007 nvme0n1 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGNhYmMxMzFkYjZhM2MyODdiZTRlNjcxNTlkMDViNWP7R8qq: 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: 00:18:09.007 13:04:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGNhYmMxMzFkYjZhM2MyODdiZTRlNjcxNTlkMDViNWP7R8qq: 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: ]] 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.007 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.266 nvme0n1 00:18:09.266 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.266 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:09.266 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:09.266 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.266 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.266 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.266 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.266 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:09.266 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.266 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.266 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.266 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:09.266 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:18:09.266 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:09.266 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:09.266 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:09.266 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:09.266 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDJmYzdmY2E1ZWVkOWYxNDBkYzg3OTViYWRiNmIxMDg1OWYwZmEyYjM2MDU4NDczRj2Cyw==: 00:18:09.266 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: 00:18:09.266 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:09.266 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:09.266 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDJmYzdmY2E1ZWVkOWYxNDBkYzg3OTViYWRiNmIxMDg1OWYwZmEyYjM2MDU4NDczRj2Cyw==: 00:18:09.266 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: ]] 00:18:09.266 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: 00:18:09.266 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:18:09.266 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:09.266 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:09.266 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:09.266 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:09.267 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:09.267 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:09.267 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.267 13:04:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.267 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.267 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:09.267 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:09.267 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:09.267 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:09.267 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:09.267 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:09.267 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:09.267 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:09.267 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:09.267 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:09.267 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:09.267 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:09.267 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.267 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.526 nvme0n1 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:09.526 
13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjk2YzVhODhiZTE0NWJhMzg0MGEyYjdjOGFkOTFmM2QxNjJjYzVlYjE0ZDUwY2VjN2I0MzE5Y2ZhY2IyYjdiN5mdgcg=: 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjk2YzVhODhiZTE0NWJhMzg0MGEyYjdjOGFkOTFmM2QxNjJjYzVlYjE0ZDUwY2VjN2I0MzE5Y2ZhY2IyYjdiN5mdgcg=: 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
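[Editor's sketch, not part of the captured console output] Each pass of the trace above exercises one (digest, dhgroup, keyid) combination: the host restricts bdev_nvme to a single digest/DH-group pair, attaches the controller with a host key and, when one exists, a controller key, checks that nvme0 appears in bdev_nvme_get_controllers, and detaches it before the next combination. The lines below reconstruct that sequence as plain RPC calls. Assumptions: an SPDK target is already running, SPDK's scripts/rpc.py is reachable as "rpc.py", and the keyring entries key0/ckey0 through key4/ckey4 were registered earlier in the run (that setup is outside this excerpt). The flag names themselves are taken verbatim from the trace.

    DIGEST=sha256
    DHGROUP=ffdhe2048
    KEYID=0

    # Limit the host to one digest / DH-group pair before connecting.
    rpc.py bdev_nvme_set_options \
        --dhchap-digests "$DIGEST" \
        --dhchap-dhgroups "$DHGROUP"

    # Connect with bidirectional DH-HMAC-CHAP (host key + controller key).
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${KEYID}" --dhchap-ctrlr-key "ckey${KEYID}"

    # The iteration passes if the controller shows up under the expected name...
    rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"

    # ...and can be detached cleanly before the next combination is tried.
    rpc.py bdev_nvme_detach_controller nvme0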
00:18:09.526 nvme0n1 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.526 13:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.526 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.526 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:09.526 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.526 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.785 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.785 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:09.785 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:09.785 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:18:09.785 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:09.785 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:09.785 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:09.785 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:09.785 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmZDIwYTU2NWE5ZGI3MmUxODUyODI1ZWFiYTc4NGYHnesH: 00:18:09.785 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: 00:18:09.785 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:09.785 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:10.043 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmZDIwYTU2NWE5ZGI3MmUxODUyODI1ZWFiYTc4NGYHnesH: 00:18:10.043 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: ]] 00:18:10.043 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: 00:18:10.043 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:18:10.043 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:10.043 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:10.043 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:10.043 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:10.043 13:04:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:10.043 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:10.043 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.043 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.043 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.043 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:10.043 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:10.043 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:10.043 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:10.043 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:10.043 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:10.043 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:10.043 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:10.043 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:10.043 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:10.043 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:10.043 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.043 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.043 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.043 nvme0n1 00:18:10.043 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.043 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:10.043 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:10.043 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.043 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.043 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:10.303 13:04:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: ]] 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:10.303 13:04:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.303 nvme0n1 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGNhYmMxMzFkYjZhM2MyODdiZTRlNjcxNTlkMDViNWP7R8qq: 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGNhYmMxMzFkYjZhM2MyODdiZTRlNjcxNTlkMDViNWP7R8qq: 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: ]] 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.303 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.562 nvme0n1 00:18:10.562 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.562 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:10.562 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.562 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:10.562 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.562 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.562 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.562 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:18:10.562 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.562 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.562 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.562 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:10.562 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:18:10.562 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:10.562 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:10.562 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:10.562 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:10.562 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDJmYzdmY2E1ZWVkOWYxNDBkYzg3OTViYWRiNmIxMDg1OWYwZmEyYjM2MDU4NDczRj2Cyw==: 00:18:10.563 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: 00:18:10.563 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:10.563 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:10.563 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDJmYzdmY2E1ZWVkOWYxNDBkYzg3OTViYWRiNmIxMDg1OWYwZmEyYjM2MDU4NDczRj2Cyw==: 00:18:10.563 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: ]] 00:18:10.563 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: 00:18:10.563 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:18:10.563 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:10.563 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:10.563 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:10.563 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:10.563 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:10.563 13:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:10.563 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.563 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.563 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.563 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:10.563 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:10.563 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:10.563 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:10.563 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:10.563 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:10.563 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:10.563 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:10.563 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:10.563 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:10.563 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:10.563 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:10.563 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.563 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.821 nvme0n1 00:18:10.821 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.821 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:10.821 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.821 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:10.821 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.821 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.821 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.821 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjk2YzVhODhiZTE0NWJhMzg0MGEyYjdjOGFkOTFmM2QxNjJjYzVlYjE0ZDUwY2VjN2I0MzE5Y2ZhY2IyYjdiN5mdgcg=: 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Mjk2YzVhODhiZTE0NWJhMzg0MGEyYjdjOGFkOTFmM2QxNjJjYzVlYjE0ZDUwY2VjN2I0MzE5Y2ZhY2IyYjdiN5mdgcg=: 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.822 nvme0n1 00:18:10.822 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.081 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:11.081 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:11.081 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.081 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.081 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.081 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.081 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:11.081 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.081 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.081 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.081 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:11.081 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:11.081 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:18:11.081 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:11.081 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:11.081 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:11.081 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:11.081 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmZDIwYTU2NWE5ZGI3MmUxODUyODI1ZWFiYTc4NGYHnesH: 00:18:11.081 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: 00:18:11.081 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:11.081 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:11.648 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmZDIwYTU2NWE5ZGI3MmUxODUyODI1ZWFiYTc4NGYHnesH: 00:18:11.648 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: ]] 00:18:11.648 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: 00:18:11.648 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:18:11.648 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:11.648 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:11.648 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:11.648 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:11.648 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:11.648 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:11.648 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.648 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.648 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.648 13:04:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:11.648 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:11.648 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:11.648 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:11.648 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:11.648 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:11.648 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:11.648 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:11.648 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:11.648 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:11.648 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:11.648 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.648 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.648 13:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.648 nvme0n1 00:18:11.648 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.648 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:11.648 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:11.648 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.648 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: ]] 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.907 13:04:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.907 nvme0n1 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.907 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGNhYmMxMzFkYjZhM2MyODdiZTRlNjcxNTlkMDViNWP7R8qq: 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGNhYmMxMzFkYjZhM2MyODdiZTRlNjcxNTlkMDViNWP7R8qq: 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: ]] 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.166 nvme0n1 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.166 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDJmYzdmY2E1ZWVkOWYxNDBkYzg3OTViYWRiNmIxMDg1OWYwZmEyYjM2MDU4NDczRj2Cyw==: 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDJmYzdmY2E1ZWVkOWYxNDBkYzg3OTViYWRiNmIxMDg1OWYwZmEyYjM2MDU4NDczRj2Cyw==: 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: ]] 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.427 nvme0n1 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:12.427 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjk2YzVhODhiZTE0NWJhMzg0MGEyYjdjOGFkOTFmM2QxNjJjYzVlYjE0ZDUwY2VjN2I0MzE5Y2ZhY2IyYjdiN5mdgcg=: 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjk2YzVhODhiZTE0NWJhMzg0MGEyYjdjOGFkOTFmM2QxNjJjYzVlYjE0ZDUwY2VjN2I0MzE5Y2ZhY2IyYjdiN5mdgcg=: 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:12.695 13:04:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.695 13:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.695 nvme0n1 00:18:12.695 13:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.695 13:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:12.695 13:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.695 13:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:12.695 13:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.695 13:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.954 13:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.954 13:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:12.954 13:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.954 13:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.954 13:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.954 13:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:12.954 13:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:12.954 13:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:18:12.954 13:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:12.954 13:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:12.954 13:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:12.954 13:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:12.954 13:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmZDIwYTU2NWE5ZGI3MmUxODUyODI1ZWFiYTc4NGYHnesH: 00:18:12.954 13:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: 00:18:12.954 13:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:12.954 13:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:14.854 13:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmZDIwYTU2NWE5ZGI3MmUxODUyODI1ZWFiYTc4NGYHnesH: 00:18:14.854 13:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: ]] 00:18:14.854 13:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: 00:18:14.854 13:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:18:14.854 13:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:14.854 13:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:14.854 13:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:14.854 13:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:14.854 13:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:14.854 13:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:14.854 13:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.854 13:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.854 13:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.854 13:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:14.854 13:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:14.854 13:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:14.854 13:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:14.854 13:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:14.854 13:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:14.854 13:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:14.854 13:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:14.854 13:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:14.854 13:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:14.854 13:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:14.854 13:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.854 13:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.854 13:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.854 nvme0n1 00:18:14.854 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.854 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: ]] 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.855 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.113 nvme0n1 00:18:15.113 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.113 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:15.113 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.113 13:04:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:15.113 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.113 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGNhYmMxMzFkYjZhM2MyODdiZTRlNjcxNTlkMDViNWP7R8qq: 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGNhYmMxMzFkYjZhM2MyODdiZTRlNjcxNTlkMDViNWP7R8qq: 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: ]] 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.372 13:04:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.372 13:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.631 nvme0n1 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDJmYzdmY2E1ZWVkOWYxNDBkYzg3OTViYWRiNmIxMDg1OWYwZmEyYjM2MDU4NDczRj2Cyw==: 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDJmYzdmY2E1ZWVkOWYxNDBkYzg3OTViYWRiNmIxMDg1OWYwZmEyYjM2MDU4NDczRj2Cyw==: 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: ]] 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:15.631 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.631 
13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.198 nvme0n1 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjk2YzVhODhiZTE0NWJhMzg0MGEyYjdjOGFkOTFmM2QxNjJjYzVlYjE0ZDUwY2VjN2I0MzE5Y2ZhY2IyYjdiN5mdgcg=: 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjk2YzVhODhiZTE0NWJhMzg0MGEyYjdjOGFkOTFmM2QxNjJjYzVlYjE0ZDUwY2VjN2I0MzE5Y2ZhY2IyYjdiN5mdgcg=: 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.198 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.457 nvme0n1 00:18:16.457 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.457 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:16.457 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.457 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:16.457 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.457 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.457 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.457 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:16.457 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.457 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.457 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.457 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:16.457 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:16.457 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:18:16.716 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:16.716 13:04:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:16.716 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:16.716 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:16.716 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmZDIwYTU2NWE5ZGI3MmUxODUyODI1ZWFiYTc4NGYHnesH: 00:18:16.716 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: 00:18:16.716 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:16.716 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:16.716 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmZDIwYTU2NWE5ZGI3MmUxODUyODI1ZWFiYTc4NGYHnesH: 00:18:16.716 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: ]] 00:18:16.716 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: 00:18:16.716 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:18:16.716 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:16.716 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:16.716 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:16.716 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:16.716 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:16.716 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:16.716 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.716 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.716 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.716 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:16.716 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:16.716 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:16.716 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:16.716 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:16.716 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:16.716 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:16.716 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:16.716 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:16.716 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:16.716 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:16.716 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.716 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.716 13:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.281 nvme0n1 00:18:17.281 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.281 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:17.281 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:17.281 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.281 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.281 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.281 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.281 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: ]] 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.282 13:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.847 nvme0n1 00:18:17.847 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.847 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:17.847 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:17.847 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.847 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.847 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.847 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.847 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:17.847 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGNhYmMxMzFkYjZhM2MyODdiZTRlNjcxNTlkMDViNWP7R8qq: 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGNhYmMxMzFkYjZhM2MyODdiZTRlNjcxNTlkMDViNWP7R8qq: 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: ]] 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:17.848 
13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.848 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.783 nvme0n1 00:18:18.783 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.783 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:18.783 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:18.783 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.783 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.783 13:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDJmYzdmY2E1ZWVkOWYxNDBkYzg3OTViYWRiNmIxMDg1OWYwZmEyYjM2MDU4NDczRj2Cyw==: 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDJmYzdmY2E1ZWVkOWYxNDBkYzg3OTViYWRiNmIxMDg1OWYwZmEyYjM2MDU4NDczRj2Cyw==: 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: ]] 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.783 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.350 nvme0n1 00:18:19.350 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.350 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:19.350 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.350 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.350 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:19.350 13:04:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.350 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.350 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:19.350 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.350 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.350 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.350 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:19.350 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:18:19.350 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:19.350 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:19.350 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:19.350 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:19.350 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjk2YzVhODhiZTE0NWJhMzg0MGEyYjdjOGFkOTFmM2QxNjJjYzVlYjE0ZDUwY2VjN2I0MzE5Y2ZhY2IyYjdiN5mdgcg=: 00:18:19.351 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:19.351 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:19.351 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:19.351 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjk2YzVhODhiZTE0NWJhMzg0MGEyYjdjOGFkOTFmM2QxNjJjYzVlYjE0ZDUwY2VjN2I0MzE5Y2ZhY2IyYjdiN5mdgcg=: 00:18:19.351 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:19.351 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:18:19.351 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:19.351 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:19.351 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:19.351 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:19.351 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:19.351 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:19.351 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.351 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.351 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.351 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:19.351 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:19.351 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:19.351 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:19.351 13:04:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:19.351 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:19.351 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:19.351 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:19.351 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:19.351 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:19.351 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:19.351 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:19.351 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.351 13:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.917 nvme0n1 00:18:19.917 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.917 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:19.917 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:19.917 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.917 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.917 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmZDIwYTU2NWE5ZGI3MmUxODUyODI1ZWFiYTc4NGYHnesH: 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmZDIwYTU2NWE5ZGI3MmUxODUyODI1ZWFiYTc4NGYHnesH: 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: ]] 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:20.176 nvme0n1 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: ]] 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.176 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.435 nvme0n1 00:18:20.435 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.435 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:20.435 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:20.435 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.435 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.435 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.435 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:18:20.436 
13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGNhYmMxMzFkYjZhM2MyODdiZTRlNjcxNTlkMDViNWP7R8qq: 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGNhYmMxMzFkYjZhM2MyODdiZTRlNjcxNTlkMDViNWP7R8qq: 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: ]] 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.436 nvme0n1 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.436 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.695 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.695 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:20.695 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.695 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.695 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.695 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:20.695 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:18:20.695 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:20.695 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:20.695 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:20.695 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:20.695 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDJmYzdmY2E1ZWVkOWYxNDBkYzg3OTViYWRiNmIxMDg1OWYwZmEyYjM2MDU4NDczRj2Cyw==: 00:18:20.695 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: 00:18:20.695 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:20.695 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:20.695 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDJmYzdmY2E1ZWVkOWYxNDBkYzg3OTViYWRiNmIxMDg1OWYwZmEyYjM2MDU4NDczRj2Cyw==: 00:18:20.695 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: ]] 00:18:20.695 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: 00:18:20.695 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:18:20.695 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:20.695 
13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:20.695 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:20.695 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:20.695 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:20.695 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:20.695 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.695 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.695 13:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.695 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.696 nvme0n1 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjk2YzVhODhiZTE0NWJhMzg0MGEyYjdjOGFkOTFmM2QxNjJjYzVlYjE0ZDUwY2VjN2I0MzE5Y2ZhY2IyYjdiN5mdgcg=: 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjk2YzVhODhiZTE0NWJhMzg0MGEyYjdjOGFkOTFmM2QxNjJjYzVlYjE0ZDUwY2VjN2I0MzE5Y2ZhY2IyYjdiN5mdgcg=: 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.696 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.955 nvme0n1 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmZDIwYTU2NWE5ZGI3MmUxODUyODI1ZWFiYTc4NGYHnesH: 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmZDIwYTU2NWE5ZGI3MmUxODUyODI1ZWFiYTc4NGYHnesH: 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: ]] 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.955 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.214 nvme0n1 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.214 
13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: ]] 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:21.214 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:21.215 13:04:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:21.215 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:21.215 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:21.215 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:21.215 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:21.215 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:21.215 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:21.215 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:21.215 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:21.215 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.215 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.215 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.215 nvme0n1 00:18:21.215 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.215 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:21.215 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.215 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.215 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:21.215 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.473 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.473 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:21.473 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.473 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.473 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.473 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:21.473 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:18:21.473 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:21.473 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:21.473 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:21.473 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:21.473 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGNhYmMxMzFkYjZhM2MyODdiZTRlNjcxNTlkMDViNWP7R8qq: 00:18:21.473 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: 00:18:21.473 13:04:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:21.473 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:21.473 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGNhYmMxMzFkYjZhM2MyODdiZTRlNjcxNTlkMDViNWP7R8qq: 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: ]] 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.474 nvme0n1 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDJmYzdmY2E1ZWVkOWYxNDBkYzg3OTViYWRiNmIxMDg1OWYwZmEyYjM2MDU4NDczRj2Cyw==: 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDJmYzdmY2E1ZWVkOWYxNDBkYzg3OTViYWRiNmIxMDg1OWYwZmEyYjM2MDU4NDczRj2Cyw==: 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: ]] 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.474 13:04:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.474 13:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.732 nvme0n1 00:18:21.732 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.732 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:21.732 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:21.732 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.732 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.732 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.732 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.732 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:21.732 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.732 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.732 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.732 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:21.732 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:18:21.732 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:21.733 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:21.733 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:21.733 
13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:21.733 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjk2YzVhODhiZTE0NWJhMzg0MGEyYjdjOGFkOTFmM2QxNjJjYzVlYjE0ZDUwY2VjN2I0MzE5Y2ZhY2IyYjdiN5mdgcg=: 00:18:21.733 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:21.733 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:21.733 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:21.733 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjk2YzVhODhiZTE0NWJhMzg0MGEyYjdjOGFkOTFmM2QxNjJjYzVlYjE0ZDUwY2VjN2I0MzE5Y2ZhY2IyYjdiN5mdgcg=: 00:18:21.733 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:21.733 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:18:21.733 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:21.733 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:21.733 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:21.733 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:21.733 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:21.733 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:21.733 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.733 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.733 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.733 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:21.733 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:21.733 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:21.733 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:21.733 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:21.733 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:21.733 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:21.733 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:21.733 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:21.733 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:21.733 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:21.733 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:21.733 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.733 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
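[Editor's sketch] The xtrace above repeats one pattern per dhgroup/keyid: configure the initiator's allowed DH-HMAC-CHAP digest and dhgroup, attach the controller with the matching key (plus a controller key when one exists), confirm the controller named nvme0 appears, then detach. The condensed loop below is a hypothetical reconstruction of that flow, not the actual host/auth.sh: the RPC names, flags, address 10.0.0.1:4420, NQNs and dhgroups are taken from the trace, while the scripts/rpc.py path, the assumption that keys key0..key4 / ckey0..ckey4 were registered earlier in the test (outside this excerpt), and the hard-coded "keyid < 4 has a controller key" check (standing in for the script's ckeys array test) are assumptions. The target-side nvmet_auth_set_key step (the echo 'hmac(sha384)' / dhgroup / DHHC-1 lines in the trace) is omitted here.

#!/usr/bin/env bash
# Hypothetical condensation of the authentication loop traced in this log.
set -e

rpc=./scripts/rpc.py                      # SPDK JSON-RPC client (assumed path)
digest=sha384
dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)  # groups exercised in this excerpt

for dhgroup in "${dhgroups[@]}"; do
    for keyid in 0 1 2 3 4; do
        # Restrict the host to a single digest/dhgroup for this attempt.
        "$rpc" bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Keys 0-3 in this trace also carry a controller (bidirectional) key;
        # key 4 does not, so only pass --dhchap-ctrlr-key when it exists.
        ctrlr_key_arg=()
        if [[ $keyid -lt 4 ]]; then
            ctrlr_key_arg=(--dhchap-ctrlr-key "ckey${keyid}")
        fi

        "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ctrlr_key_arg[@]}"

        # The controller only shows up if DH-HMAC-CHAP authentication succeeded.
        [[ "$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]

        "$rpc" bdev_nvme_detach_controller nvme0
    done
done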
00:18:21.991 nvme0n1 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmZDIwYTU2NWE5ZGI3MmUxODUyODI1ZWFiYTc4NGYHnesH: 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmZDIwYTU2NWE5ZGI3MmUxODUyODI1ZWFiYTc4NGYHnesH: 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: ]] 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:21.991 13:04:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.991 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.250 nvme0n1 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:22.250 13:04:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: ]] 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:22.250 13:04:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.250 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.508 nvme0n1 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGNhYmMxMzFkYjZhM2MyODdiZTRlNjcxNTlkMDViNWP7R8qq: 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGNhYmMxMzFkYjZhM2MyODdiZTRlNjcxNTlkMDViNWP7R8qq: 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: ]] 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.508 13:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.790 nvme0n1 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDJmYzdmY2E1ZWVkOWYxNDBkYzg3OTViYWRiNmIxMDg1OWYwZmEyYjM2MDU4NDczRj2Cyw==: 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDJmYzdmY2E1ZWVkOWYxNDBkYzg3OTViYWRiNmIxMDg1OWYwZmEyYjM2MDU4NDczRj2Cyw==: 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: ]] 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.790 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.064 nvme0n1 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjk2YzVhODhiZTE0NWJhMzg0MGEyYjdjOGFkOTFmM2QxNjJjYzVlYjE0ZDUwY2VjN2I0MzE5Y2ZhY2IyYjdiN5mdgcg=: 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Mjk2YzVhODhiZTE0NWJhMzg0MGEyYjdjOGFkOTFmM2QxNjJjYzVlYjE0ZDUwY2VjN2I0MzE5Y2ZhY2IyYjdiN5mdgcg=: 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.064 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.322 nvme0n1 00:18:23.322 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.322 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:23.322 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:23.322 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.322 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.322 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.322 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.322 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:23.322 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.322 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.580 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmZDIwYTU2NWE5ZGI3MmUxODUyODI1ZWFiYTc4NGYHnesH: 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmZDIwYTU2NWE5ZGI3MmUxODUyODI1ZWFiYTc4NGYHnesH: 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: ]] 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.581 13:04:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.581 13:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.839 nvme0n1 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: ]] 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.839 13:04:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.839 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.406 nvme0n1 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGNhYmMxMzFkYjZhM2MyODdiZTRlNjcxNTlkMDViNWP7R8qq: 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGNhYmMxMzFkYjZhM2MyODdiZTRlNjcxNTlkMDViNWP7R8qq: 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: ]] 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.406 13:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.664 nvme0n1 00:18:24.665 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.665 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:24.665 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.665 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:24.665 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.665 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.665 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.665 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:24.665 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.665 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDJmYzdmY2E1ZWVkOWYxNDBkYzg3OTViYWRiNmIxMDg1OWYwZmEyYjM2MDU4NDczRj2Cyw==: 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDJmYzdmY2E1ZWVkOWYxNDBkYzg3OTViYWRiNmIxMDg1OWYwZmEyYjM2MDU4NDczRj2Cyw==: 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: ]] 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.923 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.181 nvme0n1 00:18:25.181 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.181 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:25.181 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.181 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.181 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:25.181 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.181 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.181 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:25.181 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.181 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.181 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.181 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:25.181 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:18:25.181 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:25.181 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:25.181 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:25.181 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:25.181 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjk2YzVhODhiZTE0NWJhMzg0MGEyYjdjOGFkOTFmM2QxNjJjYzVlYjE0ZDUwY2VjN2I0MzE5Y2ZhY2IyYjdiN5mdgcg=: 00:18:25.181 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:25.181 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:25.181 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:25.181 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjk2YzVhODhiZTE0NWJhMzg0MGEyYjdjOGFkOTFmM2QxNjJjYzVlYjE0ZDUwY2VjN2I0MzE5Y2ZhY2IyYjdiN5mdgcg=: 00:18:25.181 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:25.181 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:18:25.181 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:25.181 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:25.181 13:04:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:25.181 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:25.181 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:25.181 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:25.181 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.181 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.181 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.181 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:25.182 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:25.182 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:25.182 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:25.182 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:25.182 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:25.182 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:25.182 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:25.182 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:25.182 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:25.182 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:25.182 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:25.182 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.182 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.747 nvme0n1 00:18:25.747 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.747 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:25.747 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:25.747 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.747 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.747 13:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.747 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.747 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:25.747 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.747 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.747 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.747 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:25.747 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:25.747 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmZDIwYTU2NWE5ZGI3MmUxODUyODI1ZWFiYTc4NGYHnesH: 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmZDIwYTU2NWE5ZGI3MmUxODUyODI1ZWFiYTc4NGYHnesH: 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: ]] 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.748 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.318 nvme0n1 00:18:26.318 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.318 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:26.318 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:26.318 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: ]] 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.319 13:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.891 nvme0n1 00:18:26.891 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.891 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:26.891 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.891 13:04:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.891 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:26.891 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.891 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.891 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:26.891 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.891 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.149 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.149 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:27.149 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:18:27.149 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:27.149 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:27.149 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:27.149 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:27.149 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGNhYmMxMzFkYjZhM2MyODdiZTRlNjcxNTlkMDViNWP7R8qq: 00:18:27.149 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: 00:18:27.149 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:27.149 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:27.149 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGNhYmMxMzFkYjZhM2MyODdiZTRlNjcxNTlkMDViNWP7R8qq: 00:18:27.149 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: ]] 00:18:27.149 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: 00:18:27.149 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:18:27.149 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:27.149 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:27.149 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:27.149 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:27.149 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:27.149 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:27.149 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.149 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.150 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.150 13:04:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:27.150 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:27.150 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:27.150 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:27.150 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:27.150 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:27.150 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:27.150 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:27.150 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:27.150 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:27.150 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:27.150 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.150 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.150 13:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.716 nvme0n1 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDJmYzdmY2E1ZWVkOWYxNDBkYzg3OTViYWRiNmIxMDg1OWYwZmEyYjM2MDU4NDczRj2Cyw==: 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDJmYzdmY2E1ZWVkOWYxNDBkYzg3OTViYWRiNmIxMDg1OWYwZmEyYjM2MDU4NDczRj2Cyw==: 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: ]] 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:27.716 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.716 
13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.283 nvme0n1 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjk2YzVhODhiZTE0NWJhMzg0MGEyYjdjOGFkOTFmM2QxNjJjYzVlYjE0ZDUwY2VjN2I0MzE5Y2ZhY2IyYjdiN5mdgcg=: 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjk2YzVhODhiZTE0NWJhMzg0MGEyYjdjOGFkOTFmM2QxNjJjYzVlYjE0ZDUwY2VjN2I0MzE5Y2ZhY2IyYjdiN5mdgcg=: 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.283 13:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.851 nvme0n1 00:18:28.851 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.851 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:28.851 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.851 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:28.851 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.851 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:18:29.110 13:05:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmZDIwYTU2NWE5ZGI3MmUxODUyODI1ZWFiYTc4NGYHnesH: 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmZDIwYTU2NWE5ZGI3MmUxODUyODI1ZWFiYTc4NGYHnesH: 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: ]] 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:29.110 13:05:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.110 nvme0n1 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: ]] 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:29.110 13:05:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.110 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.111 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.369 nvme0n1 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGNhYmMxMzFkYjZhM2MyODdiZTRlNjcxNTlkMDViNWP7R8qq: 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGNhYmMxMzFkYjZhM2MyODdiZTRlNjcxNTlkMDViNWP7R8qq: 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: ]] 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.369 nvme0n1 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.369 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.628 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.628 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:29.628 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.628 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.628 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.628 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:29.628 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:18:29.628 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:29.628 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:29.628 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:29.628 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:29.628 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDJmYzdmY2E1ZWVkOWYxNDBkYzg3OTViYWRiNmIxMDg1OWYwZmEyYjM2MDU4NDczRj2Cyw==: 00:18:29.628 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: 00:18:29.628 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:29.628 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:29.628 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:MDJmYzdmY2E1ZWVkOWYxNDBkYzg3OTViYWRiNmIxMDg1OWYwZmEyYjM2MDU4NDczRj2Cyw==: 00:18:29.628 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: ]] 00:18:29.628 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: 00:18:29.628 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:18:29.628 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:29.628 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:29.628 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:29.628 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:29.628 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:29.628 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:29.628 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.629 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.629 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.629 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:29.629 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:29.629 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:29.629 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:29.629 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:29.629 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:29.629 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:29.629 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:29.629 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:29.629 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:29.629 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:29.629 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:29.629 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.629 13:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.629 nvme0n1 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjk2YzVhODhiZTE0NWJhMzg0MGEyYjdjOGFkOTFmM2QxNjJjYzVlYjE0ZDUwY2VjN2I0MzE5Y2ZhY2IyYjdiN5mdgcg=: 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjk2YzVhODhiZTE0NWJhMzg0MGEyYjdjOGFkOTFmM2QxNjJjYzVlYjE0ZDUwY2VjN2I0MzE5Y2ZhY2IyYjdiN5mdgcg=: 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.629 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.887 nvme0n1 00:18:29.887 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.887 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:29.887 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.887 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:29.887 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.887 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.887 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.887 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:29.887 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.887 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.887 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.887 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:29.887 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:29.887 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmZDIwYTU2NWE5ZGI3MmUxODUyODI1ZWFiYTc4NGYHnesH: 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmZDIwYTU2NWE5ZGI3MmUxODUyODI1ZWFiYTc4NGYHnesH: 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: ]] 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.888 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:30.146 nvme0n1 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: ]] 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:30.146 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:30.147 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:30.147 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:30.147 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:30.147 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:30.147 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:30.147 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:30.147 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:30.147 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:30.147 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:30.147 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.147 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.147 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.147 nvme0n1 00:18:30.147 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.147 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:30.147 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:30.147 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.147 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.147 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:18:30.406 
13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGNhYmMxMzFkYjZhM2MyODdiZTRlNjcxNTlkMDViNWP7R8qq: 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGNhYmMxMzFkYjZhM2MyODdiZTRlNjcxNTlkMDViNWP7R8qq: 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: ]] 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.406 nvme0n1 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDJmYzdmY2E1ZWVkOWYxNDBkYzg3OTViYWRiNmIxMDg1OWYwZmEyYjM2MDU4NDczRj2Cyw==: 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDJmYzdmY2E1ZWVkOWYxNDBkYzg3OTViYWRiNmIxMDg1OWYwZmEyYjM2MDU4NDczRj2Cyw==: 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: ]] 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:30.406 
13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.406 13:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.665 nvme0n1 00:18:30.665 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.665 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:30.665 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.665 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:30.665 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.665 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.665 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.665 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:30.665 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.665 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:30.665 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.665 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:30.665 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:18:30.665 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:30.665 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:30.665 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:30.665 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:30.665 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjk2YzVhODhiZTE0NWJhMzg0MGEyYjdjOGFkOTFmM2QxNjJjYzVlYjE0ZDUwY2VjN2I0MzE5Y2ZhY2IyYjdiN5mdgcg=: 00:18:30.665 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:30.665 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:30.665 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:30.665 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjk2YzVhODhiZTE0NWJhMzg0MGEyYjdjOGFkOTFmM2QxNjJjYzVlYjE0ZDUwY2VjN2I0MzE5Y2ZhY2IyYjdiN5mdgcg=: 00:18:30.665 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:30.665 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:18:30.665 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:30.665 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:30.665 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:30.665 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:30.665 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:30.666 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:30.666 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.666 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.666 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.666 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:30.666 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:30.666 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:30.666 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:30.666 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:30.666 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:30.666 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:30.666 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:30.666 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:30.666 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:30.666 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:30.666 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:30.666 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.666 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.924 nvme0n1 00:18:30.924 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.924 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:30.924 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.924 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.924 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:30.924 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.924 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.924 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:30.924 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.924 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.924 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.924 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:30.924 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:30.924 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:18:30.924 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:30.924 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:30.924 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:30.924 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:30.924 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmZDIwYTU2NWE5ZGI3MmUxODUyODI1ZWFiYTc4NGYHnesH: 00:18:30.924 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: 00:18:30.924 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:30.925 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:30.925 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmZDIwYTU2NWE5ZGI3MmUxODUyODI1ZWFiYTc4NGYHnesH: 00:18:30.925 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: ]] 00:18:30.925 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: 00:18:30.925 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:18:30.925 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:30.925 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:30.925 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:30.925 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:30.925 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:30.925 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:30.925 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.925 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.925 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.925 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:30.925 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:30.925 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:30.925 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:30.925 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:30.925 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:30.925 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:30.925 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:30.925 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:30.925 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:30.925 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:30.925 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.925 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.925 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.184 nvme0n1 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.184 
13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: ]] 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:31.184 13:05:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.184 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.444 nvme0n1 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGNhYmMxMzFkYjZhM2MyODdiZTRlNjcxNTlkMDViNWP7R8qq: 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: 00:18:31.444 13:05:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGNhYmMxMzFkYjZhM2MyODdiZTRlNjcxNTlkMDViNWP7R8qq: 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: ]] 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.444 13:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.702 nvme0n1 00:18:31.702 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.702 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:31.702 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDJmYzdmY2E1ZWVkOWYxNDBkYzg3OTViYWRiNmIxMDg1OWYwZmEyYjM2MDU4NDczRj2Cyw==: 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDJmYzdmY2E1ZWVkOWYxNDBkYzg3OTViYWRiNmIxMDg1OWYwZmEyYjM2MDU4NDczRj2Cyw==: 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: ]] 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.703 13:05:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.703 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.961 nvme0n1 00:18:31.961 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.961 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:31.961 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:31.961 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.961 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.961 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:31.962 
13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjk2YzVhODhiZTE0NWJhMzg0MGEyYjdjOGFkOTFmM2QxNjJjYzVlYjE0ZDUwY2VjN2I0MzE5Y2ZhY2IyYjdiN5mdgcg=: 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjk2YzVhODhiZTE0NWJhMzg0MGEyYjdjOGFkOTFmM2QxNjJjYzVlYjE0ZDUwY2VjN2I0MzE5Y2ZhY2IyYjdiN5mdgcg=: 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.962 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
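For readability, here is a minimal sketch of the host-side sequence that each connect_authenticate iteration in the trace above reduces to, shown for the sha512/ffdhe4096 case with keyid 0. It assumes rpc_cmd is the test suite's wrapper around SPDK's scripts/rpc.py (as used throughout this log), that the target configured earlier in this run is still listening on 10.0.0.1:4420, and that key0/ckey0 name host and controller secrets already registered with the initiator earlier in the job; the DHHC-1 values echoed in the trace are throwaway test secrets, not real credentials.

  # Limit the initiator to the digest/dhgroup pair under test.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

  # Attach with DH-HMAC-CHAP; --dhchap-ctrlr-key is only passed when a controller
  # secret exists for this keyid, which turns on bidirectional authentication.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Confirm the controller attached, then detach before the next digest/dhgroup/keyid combination.
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  rpc_cmd bdev_nvme_detach_controller nvme0

The surrounding trace simply repeats this cycle for every dhgroup (ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ...) and keyid 0-4, after first loading the matching secret into the kernel nvmet target via nvmet_auth_set_key.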
00:18:32.221 nvme0n1 00:18:32.221 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.221 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:32.221 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:32.221 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.221 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.221 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.221 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.221 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:32.221 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.221 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.221 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.221 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:32.221 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:32.221 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:18:32.221 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:32.221 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:32.221 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:32.221 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:32.221 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmZDIwYTU2NWE5ZGI3MmUxODUyODI1ZWFiYTc4NGYHnesH: 00:18:32.221 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: 00:18:32.221 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:32.221 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:32.221 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmZDIwYTU2NWE5ZGI3MmUxODUyODI1ZWFiYTc4NGYHnesH: 00:18:32.221 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: ]] 00:18:32.221 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: 00:18:32.221 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:18:32.221 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:32.221 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:32.221 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:32.221 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:32.221 13:05:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:32.221 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:32.221 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.221 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.221 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.222 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:32.222 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:32.222 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:32.222 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:32.222 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:32.222 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:32.222 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:32.222 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:32.222 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:32.222 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:32.222 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:32.222 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.222 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.222 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.480 nvme0n1 00:18:32.480 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.480 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:32.480 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.480 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:32.480 13:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:32.738 13:05:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: ]] 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:32.738 13:05:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.738 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.996 nvme0n1 00:18:32.996 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.996 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:32.996 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:32.996 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.996 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.996 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.996 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.996 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:32.996 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.996 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.996 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.996 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGNhYmMxMzFkYjZhM2MyODdiZTRlNjcxNTlkMDViNWP7R8qq: 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGNhYmMxMzFkYjZhM2MyODdiZTRlNjcxNTlkMDViNWP7R8qq: 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: ]] 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.997 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.564 nvme0n1 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDJmYzdmY2E1ZWVkOWYxNDBkYzg3OTViYWRiNmIxMDg1OWYwZmEyYjM2MDU4NDczRj2Cyw==: 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDJmYzdmY2E1ZWVkOWYxNDBkYzg3OTViYWRiNmIxMDg1OWYwZmEyYjM2MDU4NDczRj2Cyw==: 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: ]] 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.564 13:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.822 nvme0n1 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjk2YzVhODhiZTE0NWJhMzg0MGEyYjdjOGFkOTFmM2QxNjJjYzVlYjE0ZDUwY2VjN2I0MzE5Y2ZhY2IyYjdiN5mdgcg=: 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Mjk2YzVhODhiZTE0NWJhMzg0MGEyYjdjOGFkOTFmM2QxNjJjYzVlYjE0ZDUwY2VjN2I0MzE5Y2ZhY2IyYjdiN5mdgcg=: 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.822 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.389 nvme0n1 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmZDIwYTU2NWE5ZGI3MmUxODUyODI1ZWFiYTc4NGYHnesH: 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmZDIwYTU2NWE5ZGI3MmUxODUyODI1ZWFiYTc4NGYHnesH: 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: ]] 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmI0MzAzMGEzYmVhYjEzZGIxM2VlNjg3MGI4YzMxMTE5NmFhM2RmNjgxZGEzZmExOTVkMjQxMGU4NWQ5OGY5Oep0wGA=: 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.389 13:05:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.389 13:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.957 nvme0n1 00:18:34.957 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.957 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:34.957 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.957 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.957 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:34.957 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.957 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.957 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:34.957 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.957 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.957 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.957 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:34.957 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:18:34.957 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:34.957 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:34.957 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:34.957 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:34.957 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:34.957 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:34.957 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:34.957 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:34.957 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:34.957 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: ]] 00:18:34.957 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:34.957 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:18:34.957 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:34.957 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:34.957 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:34.957 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:34.957 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:34.957 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:34.957 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.958 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.958 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.958 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:34.958 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:34.958 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:34.958 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:34.958 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:34.958 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:34.958 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:34.958 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:34.958 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:34.958 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:34.958 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:34.958 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.958 13:05:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.958 13:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.894 nvme0n1 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGNhYmMxMzFkYjZhM2MyODdiZTRlNjcxNTlkMDViNWP7R8qq: 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGNhYmMxMzFkYjZhM2MyODdiZTRlNjcxNTlkMDViNWP7R8qq: 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: ]] 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.894 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.462 nvme0n1 00:18:36.462 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.462 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:36.462 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:36.462 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.462 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.462 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.462 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.462 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:36.462 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.462 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.462 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.462 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:36.462 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:18:36.462 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:36.462 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:36.462 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:36.462 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:36.462 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDJmYzdmY2E1ZWVkOWYxNDBkYzg3OTViYWRiNmIxMDg1OWYwZmEyYjM2MDU4NDczRj2Cyw==: 00:18:36.462 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: 00:18:36.462 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:36.462 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:36.463 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDJmYzdmY2E1ZWVkOWYxNDBkYzg3OTViYWRiNmIxMDg1OWYwZmEyYjM2MDU4NDczRj2Cyw==: 00:18:36.463 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: ]] 00:18:36.463 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDY4MDUyNmM5Y2JhNWZjZGYxOTM2MzA2Nzg2YWUzNWJHJ6K2: 00:18:36.463 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:18:36.463 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:36.463 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:36.463 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:36.463 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:36.463 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:36.463 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:36.463 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.463 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.463 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.463 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:36.463 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:36.463 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:36.463 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:36.463 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:36.463 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:36.463 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:36.463 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:36.463 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:36.463 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:36.463 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:36.463 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:36.463 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.463 13:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.030 nvme0n1 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjk2YzVhODhiZTE0NWJhMzg0MGEyYjdjOGFkOTFmM2QxNjJjYzVlYjE0ZDUwY2VjN2I0MzE5Y2ZhY2IyYjdiN5mdgcg=: 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjk2YzVhODhiZTE0NWJhMzg0MGEyYjdjOGFkOTFmM2QxNjJjYzVlYjE0ZDUwY2VjN2I0MzE5Y2ZhY2IyYjdiN5mdgcg=: 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:37.030 13:05:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.030 13:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.972 nvme0n1 00:18:37.972 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.972 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:37.972 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: ]] 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.973 request: 00:18:37.973 { 00:18:37.973 "name": "nvme0", 00:18:37.973 "trtype": "tcp", 00:18:37.973 "traddr": "10.0.0.1", 00:18:37.973 "adrfam": "ipv4", 00:18:37.973 "trsvcid": "4420", 00:18:37.973 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:37.973 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:37.973 "prchk_reftag": false, 00:18:37.973 "prchk_guard": false, 00:18:37.973 "hdgst": false, 00:18:37.973 "ddgst": false, 00:18:37.973 "allow_unrecognized_csi": false, 00:18:37.973 "method": "bdev_nvme_attach_controller", 00:18:37.973 "req_id": 1 00:18:37.973 } 00:18:37.973 Got JSON-RPC error response 00:18:37.973 response: 00:18:37.973 { 00:18:37.973 "code": -5, 00:18:37.973 "message": "Input/output error" 00:18:37.973 } 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.973 request: 00:18:37.973 { 00:18:37.973 "name": "nvme0", 00:18:37.973 "trtype": "tcp", 00:18:37.973 "traddr": "10.0.0.1", 00:18:37.973 "adrfam": "ipv4", 00:18:37.973 "trsvcid": "4420", 00:18:37.973 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:37.973 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:37.973 "prchk_reftag": false, 00:18:37.973 "prchk_guard": false, 00:18:37.973 "hdgst": false, 00:18:37.973 "ddgst": false, 00:18:37.973 "dhchap_key": "key2", 00:18:37.973 "allow_unrecognized_csi": false, 00:18:37.973 "method": "bdev_nvme_attach_controller", 00:18:37.973 "req_id": 1 00:18:37.973 } 00:18:37.973 Got JSON-RPC error response 00:18:37.973 response: 00:18:37.973 { 00:18:37.973 "code": -5, 00:18:37.973 "message": "Input/output error" 00:18:37.973 } 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:37.973 13:05:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.973 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.974 request: 00:18:37.974 { 00:18:37.974 "name": "nvme0", 00:18:37.974 "trtype": "tcp", 00:18:37.974 "traddr": "10.0.0.1", 00:18:37.974 "adrfam": "ipv4", 00:18:37.974 "trsvcid": "4420", 
00:18:37.974 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:37.974 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:37.974 "prchk_reftag": false, 00:18:37.974 "prchk_guard": false, 00:18:37.974 "hdgst": false, 00:18:37.974 "ddgst": false, 00:18:37.974 "dhchap_key": "key1", 00:18:37.974 "dhchap_ctrlr_key": "ckey2", 00:18:37.974 "allow_unrecognized_csi": false, 00:18:37.974 "method": "bdev_nvme_attach_controller", 00:18:37.974 "req_id": 1 00:18:37.974 } 00:18:37.974 Got JSON-RPC error response 00:18:37.974 response: 00:18:37.974 { 00:18:37.974 "code": -5, 00:18:37.974 "message": "Input/output error" 00:18:37.974 } 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.974 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.232 nvme0n1 00:18:38.232 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.232 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:38.232 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NGNhYmMxMzFkYjZhM2MyODdiZTRlNjcxNTlkMDViNWP7R8qq: 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGNhYmMxMzFkYjZhM2MyODdiZTRlNjcxNTlkMDViNWP7R8qq: 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: ]] 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.233 request: 00:18:38.233 { 00:18:38.233 "name": "nvme0", 00:18:38.233 "dhchap_key": "key1", 00:18:38.233 "dhchap_ctrlr_key": "ckey2", 00:18:38.233 "method": "bdev_nvme_set_keys", 00:18:38.233 "req_id": 1 00:18:38.233 } 00:18:38.233 Got JSON-RPC error response 00:18:38.233 response: 00:18:38.233 
{ 00:18:38.233 "code": -13, 00:18:38.233 "message": "Permission denied" 00:18:38.233 } 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:18:38.233 13:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI3NzhkZWE2MDFhNDgzMTk1ZWE4YTYyOTY1NmZhODM2ZmU3MjY0NGZmZWUwNDcxP7/3bQ==: 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: ]] 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQzZDdhY2Y4ZjMzYWI0OTRkNGQ0MTI1NjU4NGI1NjZjZWNhNzBlZGQ4ZmE5NDA3YD4EmA==: 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.608 nvme0n1 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGNhYmMxMzFkYjZhM2MyODdiZTRlNjcxNTlkMDViNWP7R8qq: 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGNhYmMxMzFkYjZhM2MyODdiZTRlNjcxNTlkMDViNWP7R8qq: 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: ]] 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjI4ODkyODA2OTNkY2ZjN2Q4ZTg2ZWExMjc5MDgxMzU3H759: 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.608 request: 00:18:39.608 { 00:18:39.608 "name": "nvme0", 00:18:39.608 "dhchap_key": "key2", 00:18:39.608 "dhchap_ctrlr_key": "ckey1", 00:18:39.608 "method": "bdev_nvme_set_keys", 00:18:39.608 "req_id": 1 00:18:39.608 } 00:18:39.608 Got JSON-RPC error response 00:18:39.608 response: 00:18:39.608 { 00:18:39.608 "code": -13, 00:18:39.608 "message": "Permission denied" 00:18:39.608 } 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:18:39.608 13:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:18:40.543 13:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:18:40.543 13:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:18:40.543 13:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.543 13:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.543 13:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.543 13:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:18:40.543 13:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:18:40.543 13:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:18:40.543 13:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:18:40.543 13:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:18:40.543 13:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:18:40.543 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:40.543 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:18:40.543 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:40.543 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:40.543 rmmod nvme_tcp 00:18:40.543 rmmod nvme_fabrics 00:18:40.802 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:40.802 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:18:40.802 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:18:40.802 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 78505 ']' 00:18:40.802 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 78505 00:18:40.802 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 78505 ']' 00:18:40.802 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 78505 00:18:40.802 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:18:40.802 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:40.802 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78505 00:18:40.802 killing process with pid 78505 00:18:40.802 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:40.802 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:40.802 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78505' 00:18:40.802 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 78505 00:18:40.802 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 78505 00:18:40.802 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:40.802 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:40.803 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:40.803 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:18:40.803 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:18:40.803 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:40.803 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:18:40.803 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:40.803 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:40.803 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:40.803 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:41.061 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:41.061 13:05:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:41.061 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:41.061 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:41.061 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:41.061 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:41.061 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:41.061 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:41.061 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:41.061 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:41.062 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:41.062 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:41.062 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.062 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:41.062 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.062 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:18:41.062 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:41.062 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:41.062 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:18:41.062 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:18:41.062 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:18:41.062 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:41.062 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:41.062 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:41.062 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:41.062 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:18:41.062 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:18:41.335 13:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:41.925 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:41.925 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:18:42.182 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:42.182 13:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.AXY /tmp/spdk.key-null.Ugl /tmp/spdk.key-sha256.ZYp /tmp/spdk.key-sha384.8XI /tmp/spdk.key-sha512.cQk /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:18:42.183 13:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:42.440 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:42.440 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:42.440 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:42.698 ************************************ 00:18:42.698 END TEST nvmf_auth_host 00:18:42.698 ************************************ 00:18:42.698 00:18:42.698 real 0m38.581s 00:18:42.698 user 0m34.984s 00:18:42.698 sys 0m4.137s 00:18:42.698 13:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:42.698 13:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.698 ************************************ 00:18:42.698 START TEST nvmf_digest 00:18:42.698 ************************************ 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:42.698 * Looking for test storage... 
00:18:42.698 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:42.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.698 --rc genhtml_branch_coverage=1 00:18:42.698 --rc genhtml_function_coverage=1 00:18:42.698 --rc genhtml_legend=1 00:18:42.698 --rc geninfo_all_blocks=1 00:18:42.698 --rc geninfo_unexecuted_blocks=1 00:18:42.698 00:18:42.698 ' 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:42.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.698 --rc genhtml_branch_coverage=1 00:18:42.698 --rc genhtml_function_coverage=1 00:18:42.698 --rc genhtml_legend=1 00:18:42.698 --rc geninfo_all_blocks=1 00:18:42.698 --rc geninfo_unexecuted_blocks=1 00:18:42.698 00:18:42.698 ' 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:42.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.698 --rc genhtml_branch_coverage=1 00:18:42.698 --rc genhtml_function_coverage=1 00:18:42.698 --rc genhtml_legend=1 00:18:42.698 --rc geninfo_all_blocks=1 00:18:42.698 --rc geninfo_unexecuted_blocks=1 00:18:42.698 00:18:42.698 ' 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:42.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.698 --rc genhtml_branch_coverage=1 00:18:42.698 --rc genhtml_function_coverage=1 00:18:42.698 --rc genhtml_legend=1 00:18:42.698 --rc geninfo_all_blocks=1 00:18:42.698 --rc geninfo_unexecuted_blocks=1 00:18:42.698 00:18:42.698 ' 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:42.698 13:05:14 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:42.698 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:42.957 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:42.957 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:42.958 Cannot find device "nvmf_init_br" 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:42.958 Cannot find device "nvmf_init_br2" 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:42.958 Cannot find device "nvmf_tgt_br" 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:18:42.958 Cannot find device "nvmf_tgt_br2" 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:42.958 Cannot find device "nvmf_init_br" 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:42.958 Cannot find device "nvmf_init_br2" 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:42.958 Cannot find device "nvmf_tgt_br" 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:42.958 Cannot find device "nvmf_tgt_br2" 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:42.958 Cannot find device "nvmf_br" 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:42.958 Cannot find device "nvmf_init_if" 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:42.958 Cannot find device "nvmf_init_if2" 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:42.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:42.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:42.958 13:05:14 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:42.958 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:43.216 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:43.217 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:43.217 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:18:43.217 00:18:43.217 --- 10.0.0.3 ping statistics --- 00:18:43.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.217 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:43.217 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:43.217 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.102 ms 00:18:43.217 00:18:43.217 --- 10.0.0.4 ping statistics --- 00:18:43.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.217 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:43.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:43.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:18:43.217 00:18:43.217 --- 10.0.0.1 ping statistics --- 00:18:43.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.217 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:43.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:43.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:18:43.217 00:18:43.217 --- 10.0.0.2 ping statistics --- 00:18:43.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.217 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:43.217 ************************************ 00:18:43.217 START TEST nvmf_digest_clean 00:18:43.217 ************************************ 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
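For reference, a condensed sketch of the test topology that the nvmf/common.sh trace above builds (device names and 10.0.0.x addresses are taken from the log; the commands are an illustrative simplification of the real helpers, which also bring every *_if end and the namespaced links up before the ping checks):

  ip netns add nvmf_tgt_ns_spdk                                   # target runs in its own namespace
  ip link add nvmf_init_if  type veth peer name nvmf_init_br      # initiator-side veth pairs
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br       # target-side veth pairs
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator addresses
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge && ip link set nvmf_br up       # bridge joins the *_br peer ends
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up && ip link set "$dev" master nvmf_br
  done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
  ping -c 1 10.0.0.3                                              # initiator -> target sanity check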
00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=80168 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 80168 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80168 ']' 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:43.217 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:43.217 [2024-11-29 13:05:14.709927] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:18:43.217 [2024-11-29 13:05:14.710036] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:43.476 [2024-11-29 13:05:14.861983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.476 [2024-11-29 13:05:14.919206] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:43.476 [2024-11-29 13:05:14.919266] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:43.476 [2024-11-29 13:05:14.919281] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:43.476 [2024-11-29 13:05:14.919291] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:43.476 [2024-11-29 13:05:14.919301] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
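The nvmfappstart step traced above amounts to launching nvmf_tgt inside the target namespace and waiting for its RPC socket; a hedged, illustrative equivalent (the real waitforlisten helper in autotest_common.sh is more careful about timeouts and PID checks):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # poll the default RPC socket (/var/tmp/spdk.sock) until the app answers
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods &>/dev/null; do
      sleep 0.5
  done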
00:18:43.476 [2024-11-29 13:05:14.919822] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.476 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:43.476 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:18:43.476 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:43.476 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:43.476 13:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:43.734 13:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.734 13:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:18:43.734 13:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:18:43.734 13:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:18:43.734 13:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.734 13:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:43.734 [2024-11-29 13:05:15.075983] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:43.734 null0 00:18:43.734 [2024-11-29 13:05:15.129984] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:43.734 [2024-11-29 13:05:15.154099] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:43.734 13:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.734 13:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:18:43.734 13:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:43.734 13:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:43.734 13:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:43.734 13:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:18:43.735 13:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:18:43.735 13:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:43.735 13:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80194 00:18:43.735 13:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80194 /var/tmp/bperf.sock 00:18:43.735 13:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:43.735 13:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80194 ']' 00:18:43.735 13:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:18:43.735 13:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:43.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:43.735 13:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:43.735 13:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:43.735 13:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:43.735 [2024-11-29 13:05:15.223589] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:18:43.735 [2024-11-29 13:05:15.223724] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80194 ] 00:18:43.993 [2024-11-29 13:05:15.377419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.993 [2024-11-29 13:05:15.465961] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.927 13:05:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:44.927 13:05:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:18:44.927 13:05:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:44.927 13:05:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:44.927 13:05:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:45.185 [2024-11-29 13:05:16.548720] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:45.186 13:05:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:45.186 13:05:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:45.444 nvme0n1 00:18:45.701 13:05:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:45.701 13:05:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:45.701 Running I/O for 2 seconds... 
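Each run_bperf pass drives the same RPC sequence against the bdevperf instance listening on /var/tmp/bperf.sock: finish framework init (bdevperf was started with --wait-for-rpc), attach an NVMe-oF TCP controller with data digest enabled, then kick off the workload from bdevperf.py. A sketch of that sequence, with the commands as they appear in the trace above:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  $RPC framework_start_init
  $RPC bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0       # --ddgst enables the NVMe/TCP data digest (crc32c)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests         # runs the -w/-o/-q workload for the -t seconds given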
00:18:48.010 14986.00 IOPS, 58.54 MiB/s [2024-11-29T13:05:19.525Z] 14795.50 IOPS, 57.79 MiB/s 00:18:48.010 Latency(us) 00:18:48.010 [2024-11-29T13:05:19.525Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.010 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:48.010 nvme0n1 : 2.01 14784.02 57.75 0.00 0.00 8651.32 7417.48 22043.93 00:18:48.010 [2024-11-29T13:05:19.525Z] =================================================================================================================== 00:18:48.010 [2024-11-29T13:05:19.525Z] Total : 14784.02 57.75 0.00 0.00 8651.32 7417.48 22043.93 00:18:48.010 { 00:18:48.010 "results": [ 00:18:48.010 { 00:18:48.010 "job": "nvme0n1", 00:18:48.010 "core_mask": "0x2", 00:18:48.010 "workload": "randread", 00:18:48.010 "status": "finished", 00:18:48.010 "queue_depth": 128, 00:18:48.010 "io_size": 4096, 00:18:48.010 "runtime": 2.010211, 00:18:48.010 "iops": 14784.020184945759, 00:18:48.010 "mibps": 57.75007884744437, 00:18:48.010 "io_failed": 0, 00:18:48.010 "io_timeout": 0, 00:18:48.010 "avg_latency_us": 8651.316227574036, 00:18:48.010 "min_latency_us": 7417.483636363636, 00:18:48.010 "max_latency_us": 22043.927272727273 00:18:48.010 } 00:18:48.010 ], 00:18:48.010 "core_count": 1 00:18:48.010 } 00:18:48.010 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:48.010 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:48.010 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:48.010 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:48.010 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:48.010 | select(.opcode=="crc32c") 00:18:48.010 | "\(.module_name) \(.executed)"' 00:18:48.010 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:48.010 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:48.010 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:48.010 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:48.010 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80194 00:18:48.010 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80194 ']' 00:18:48.011 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80194 00:18:48.011 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:18:48.011 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:48.011 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80194 00:18:48.269 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:48.269 killing process with pid 80194 00:18:48.269 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:48.269 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80194' 00:18:48.269 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80194 00:18:48.269 Received shutdown signal, test time was about 2.000000 seconds 00:18:48.269 00:18:48.269 Latency(us) 00:18:48.269 [2024-11-29T13:05:19.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.269 [2024-11-29T13:05:19.785Z] =================================================================================================================== 00:18:48.270 [2024-11-29T13:05:19.785Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:48.270 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80194 00:18:48.527 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:18:48.527 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:48.527 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:48.527 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:48.527 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:48.527 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:48.527 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:48.527 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80254 00:18:48.527 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80254 /var/tmp/bperf.sock 00:18:48.527 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:48.527 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80254 ']' 00:18:48.528 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:48.528 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:48.528 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:48.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:48.528 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:48.528 13:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:48.528 [2024-11-29 13:05:19.869563] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:18:48.528 [2024-11-29 13:05:19.869711] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80254 ] 00:18:48.528 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:48.528 Zero copy mechanism will not be used. 00:18:48.528 [2024-11-29 13:05:20.016345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.785 [2024-11-29 13:05:20.085385] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.785 13:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:48.785 13:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:18:48.785 13:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:48.785 13:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:48.785 13:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:49.043 [2024-11-29 13:05:20.424386] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:49.043 13:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:49.043 13:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:49.607 nvme0n1 00:18:49.607 13:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:49.607 13:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:49.607 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:49.607 Zero copy mechanism will not be used. 00:18:49.607 Running I/O for 2 seconds... 
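Every run finishes with the same accel-statistics check (visible above after the first run and repeated below): read the crc32c operation stats from the bperf socket and verify that the digest work really executed, in the software module since scan_dsa=false. A sketch of that check, using the jq filter from the trace:

  read -r acc_module acc_executed < <(
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  (( acc_executed > 0 ))                 # at least one crc32c was computed
  [[ $acc_module == software ]]          # expected module when DSA offload is not in use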
00:18:51.920 6736.00 IOPS, 842.00 MiB/s [2024-11-29T13:05:23.435Z] 6720.00 IOPS, 840.00 MiB/s 00:18:51.920 Latency(us) 00:18:51.920 [2024-11-29T13:05:23.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.920 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:51.920 nvme0n1 : 2.00 6718.45 839.81 0.00 0.00 2378.25 2025.66 8400.52 00:18:51.920 [2024-11-29T13:05:23.435Z] =================================================================================================================== 00:18:51.920 [2024-11-29T13:05:23.435Z] Total : 6718.45 839.81 0.00 0.00 2378.25 2025.66 8400.52 00:18:51.920 { 00:18:51.920 "results": [ 00:18:51.920 { 00:18:51.920 "job": "nvme0n1", 00:18:51.920 "core_mask": "0x2", 00:18:51.920 "workload": "randread", 00:18:51.920 "status": "finished", 00:18:51.920 "queue_depth": 16, 00:18:51.920 "io_size": 131072, 00:18:51.920 "runtime": 2.002842, 00:18:51.920 "iops": 6718.453078175912, 00:18:51.920 "mibps": 839.806634771989, 00:18:51.920 "io_failed": 0, 00:18:51.920 "io_timeout": 0, 00:18:51.920 "avg_latency_us": 2378.2452578099665, 00:18:51.920 "min_latency_us": 2025.658181818182, 00:18:51.920 "max_latency_us": 8400.523636363636 00:18:51.920 } 00:18:51.920 ], 00:18:51.920 "core_count": 1 00:18:51.920 } 00:18:51.920 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:51.920 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:51.920 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:51.920 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:51.920 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:51.920 | select(.opcode=="crc32c") 00:18:51.920 | "\(.module_name) \(.executed)"' 00:18:51.920 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:51.920 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:51.920 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:51.920 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:51.920 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80254 00:18:51.920 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80254 ']' 00:18:51.920 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80254 00:18:51.920 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:18:51.920 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:51.920 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80254 00:18:51.920 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:51.920 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:18:51.920 killing process with pid 80254 00:18:51.920 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80254' 00:18:51.920 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80254 00:18:51.920 Received shutdown signal, test time was about 2.000000 seconds 00:18:51.920 00:18:51.920 Latency(us) 00:18:51.920 [2024-11-29T13:05:23.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.920 [2024-11-29T13:05:23.435Z] =================================================================================================================== 00:18:51.920 [2024-11-29T13:05:23.436Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:51.921 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80254 00:18:52.186 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:18:52.186 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:52.186 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:52.186 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:52.186 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:18:52.186 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:18:52.186 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:52.186 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80307 00:18:52.186 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80307 /var/tmp/bperf.sock 00:18:52.186 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80307 ']' 00:18:52.186 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:52.186 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:52.186 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:52.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:52.186 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:52.186 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:52.186 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:52.186 [2024-11-29 13:05:23.644118] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:18:52.186 [2024-11-29 13:05:23.644226] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80307 ] 00:18:52.444 [2024-11-29 13:05:23.792257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.444 [2024-11-29 13:05:23.856394] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.444 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:52.444 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:18:52.444 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:52.444 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:52.444 13:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:52.702 [2024-11-29 13:05:24.203310] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:52.960 13:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:52.960 13:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:53.219 nvme0n1 00:18:53.219 13:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:53.219 13:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:53.220 Running I/O for 2 seconds... 
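For orientation, nvmf_digest_clean repeats the same harness over four workload shapes (host/digest.sh@128-131 in the trace); the two 131072-byte cases also exceed bdevperf's 65536-byte zero-copy threshold, hence the "Zero copy mechanism will not be used" notices:

  #          rw         block size  queue depth  dsa
  run_bperf  randread      4096        128       false
  run_bperf  randread    131072         16       false
  run_bperf  randwrite     4096        128       false
  run_bperf  randwrite   131072         16       false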
00:18:55.531 15876.00 IOPS, 62.02 MiB/s [2024-11-29T13:05:27.046Z] 16256.50 IOPS, 63.50 MiB/s 00:18:55.531 Latency(us) 00:18:55.531 [2024-11-29T13:05:27.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.531 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:55.531 nvme0n1 : 2.01 16303.61 63.69 0.00 0.00 7843.61 3306.59 19541.64 00:18:55.531 [2024-11-29T13:05:27.046Z] =================================================================================================================== 00:18:55.531 [2024-11-29T13:05:27.046Z] Total : 16303.61 63.69 0.00 0.00 7843.61 3306.59 19541.64 00:18:55.531 { 00:18:55.531 "results": [ 00:18:55.531 { 00:18:55.531 "job": "nvme0n1", 00:18:55.531 "core_mask": "0x2", 00:18:55.531 "workload": "randwrite", 00:18:55.531 "status": "finished", 00:18:55.531 "queue_depth": 128, 00:18:55.531 "io_size": 4096, 00:18:55.531 "runtime": 2.009862, 00:18:55.531 "iops": 16303.60691430556, 00:18:55.531 "mibps": 63.68596450900609, 00:18:55.531 "io_failed": 0, 00:18:55.531 "io_timeout": 0, 00:18:55.531 "avg_latency_us": 7843.6125, 00:18:55.531 "min_latency_us": 3306.589090909091, 00:18:55.531 "max_latency_us": 19541.643636363635 00:18:55.531 } 00:18:55.531 ], 00:18:55.531 "core_count": 1 00:18:55.531 } 00:18:55.531 13:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:55.531 13:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:55.531 13:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:55.531 13:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:55.531 | select(.opcode=="crc32c") 00:18:55.531 | "\(.module_name) \(.executed)"' 00:18:55.531 13:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:55.790 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:55.790 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:55.790 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:55.790 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:55.790 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80307 00:18:55.790 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80307 ']' 00:18:55.790 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80307 00:18:55.790 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:18:55.790 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.790 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80307 00:18:55.790 killing process with pid 80307 00:18:55.790 Received shutdown signal, test time was about 2.000000 seconds 00:18:55.790 00:18:55.790 Latency(us) 00:18:55.790 [2024-11-29T13:05:27.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.790 
[2024-11-29T13:05:27.305Z] =================================================================================================================== 00:18:55.790 [2024-11-29T13:05:27.305Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:55.790 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:55.790 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:55.790 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80307' 00:18:55.790 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80307 00:18:55.790 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80307 00:18:56.048 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:18:56.048 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:56.048 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:56.048 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:56.048 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:56.048 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:56.048 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:56.048 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80361 00:18:56.048 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80361 /var/tmp/bperf.sock 00:18:56.048 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80361 ']' 00:18:56.048 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:56.048 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:56.048 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:56.048 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:56.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:56.048 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.049 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:56.049 [2024-11-29 13:05:27.411562] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:18:56.049 [2024-11-29 13:05:27.412174] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80361 ] 00:18:56.049 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:56.049 Zero copy mechanism will not be used. 00:18:56.049 [2024-11-29 13:05:27.554765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.308 [2024-11-29 13:05:27.646117] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.308 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:56.308 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:18:56.308 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:56.308 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:56.308 13:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:56.566 [2024-11-29 13:05:28.004652] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:56.566 13:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:56.566 13:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:57.131 nvme0n1 00:18:57.131 13:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:57.131 13:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:57.131 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:57.131 Zero copy mechanism will not be used. 00:18:57.131 Running I/O for 2 seconds... 
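The MiB/s column in each summary is simply IOPS times the I/O size: for the randwrite 4096-byte run above, 16303.61 IOPS x 4096 B / 2^20 ≈ 63.69 MiB/s, and the same product reproduces the ~840 MiB/s figures for the 131072-byte runs. A small sketch that recomputes it from the JSON block bdevperf prints, assuming that block has been saved to a hypothetical results.json:

  jq -r '.results[] | "\(.job): \(.iops * .io_size / 1048576) MiB/s"' results.json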
00:18:59.445 5399.00 IOPS, 674.88 MiB/s [2024-11-29T13:05:30.960Z] 5372.00 IOPS, 671.50 MiB/s 00:18:59.445 Latency(us) 00:18:59.445 [2024-11-29T13:05:30.960Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.445 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:59.445 nvme0n1 : 2.00 5371.05 671.38 0.00 0.00 2973.09 2085.24 6702.55 00:18:59.445 [2024-11-29T13:05:30.960Z] =================================================================================================================== 00:18:59.445 [2024-11-29T13:05:30.960Z] Total : 5371.05 671.38 0.00 0.00 2973.09 2085.24 6702.55 00:18:59.445 { 00:18:59.445 "results": [ 00:18:59.445 { 00:18:59.445 "job": "nvme0n1", 00:18:59.445 "core_mask": "0x2", 00:18:59.445 "workload": "randwrite", 00:18:59.445 "status": "finished", 00:18:59.445 "queue_depth": 16, 00:18:59.445 "io_size": 131072, 00:18:59.445 "runtime": 2.00445, 00:18:59.445 "iops": 5371.049415051511, 00:18:59.445 "mibps": 671.3811768814388, 00:18:59.445 "io_failed": 0, 00:18:59.445 "io_timeout": 0, 00:18:59.445 "avg_latency_us": 2973.0915575971494, 00:18:59.445 "min_latency_us": 2085.2363636363634, 00:18:59.445 "max_latency_us": 6702.545454545455 00:18:59.445 } 00:18:59.445 ], 00:18:59.445 "core_count": 1 00:18:59.445 } 00:18:59.445 13:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:59.445 13:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:59.445 13:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:59.445 13:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:59.445 13:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:59.445 | select(.opcode=="crc32c") 00:18:59.445 | "\(.module_name) \(.executed)"' 00:18:59.445 13:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:59.445 13:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:59.445 13:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:59.445 13:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:59.445 13:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80361 00:18:59.445 13:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80361 ']' 00:18:59.445 13:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80361 00:18:59.445 13:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:18:59.445 13:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:59.445 13:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80361 00:18:59.445 killing process with pid 80361 00:18:59.445 Received shutdown signal, test time was about 2.000000 seconds 00:18:59.445 00:18:59.445 Latency(us) 00:18:59.445 [2024-11-29T13:05:30.960Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:18:59.445 [2024-11-29T13:05:30.960Z] =================================================================================================================== 00:18:59.445 [2024-11-29T13:05:30.960Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:59.445 13:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:59.445 13:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:59.445 13:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80361' 00:18:59.445 13:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80361 00:18:59.445 13:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80361 00:18:59.703 13:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80168 00:18:59.703 13:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80168 ']' 00:18:59.703 13:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80168 00:18:59.703 13:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:18:59.703 13:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:59.703 13:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80168 00:18:59.962 killing process with pid 80168 00:18:59.962 13:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:59.962 13:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:59.962 13:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80168' 00:18:59.962 13:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80168 00:18:59.962 13:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80168 00:18:59.962 ************************************ 00:18:59.962 END TEST nvmf_digest_clean 00:18:59.962 ************************************ 00:18:59.962 00:18:59.962 real 0m16.787s 00:18:59.962 user 0m31.837s 00:18:59.962 sys 0m5.704s 00:18:59.962 13:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:59.962 13:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:59.962 13:05:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:18:59.962 13:05:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:59.962 13:05:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:59.962 13:05:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:00.221 ************************************ 00:19:00.221 START TEST nvmf_digest_error 00:19:00.221 ************************************ 00:19:00.221 13:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:19:00.221 13:05:31 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:19:00.221 13:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:00.221 13:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:00.221 13:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:00.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.221 13:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=80441 00:19:00.221 13:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 80441 00:19:00.221 13:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80441 ']' 00:19:00.221 13:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.221 13:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:00.221 13:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:00.221 13:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.221 13:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:00.221 13:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:00.221 [2024-11-29 13:05:31.547262] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:19:00.221 [2024-11-29 13:05:31.547404] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.221 [2024-11-29 13:05:31.690605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.478 [2024-11-29 13:05:31.745553] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:00.478 [2024-11-29 13:05:31.745610] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:00.478 [2024-11-29 13:05:31.745621] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:00.478 [2024-11-29 13:05:31.745629] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:00.478 [2024-11-29 13:05:31.745636] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
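The nvmf_digest_error test starting here differs from the clean variant in one respect: crc32c on the target is routed through the error-injection accel module and then told to corrupt digests, so the host (attached with --ddgst and a -1 bdev retry count) reports data digest errors and transient transport error completions, as in the READ entries traced further below. An illustrative sketch of that RPC sequence, with commands taken from the trace that follows:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC accel_assign_opc -o crc32c -m error                   # target: crc32c handled by the error module
  $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options \
      --nvme-error-stat --bdev-retry-count -1                # host: keep error stats, retry on failure
  $RPC accel_error_inject_error -o crc32c -t disable         # start with injection off
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 256  # turn on crc32c corruption (flags as traced below)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests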
00:19:00.478 [2024-11-29 13:05:31.746114] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.044 13:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.044 13:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:19:01.044 13:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:01.044 13:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:01.044 13:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:01.302 13:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.302 13:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:19:01.302 13:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.302 13:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:01.302 [2024-11-29 13:05:32.570801] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:19:01.302 13:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.302 13:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:19:01.302 13:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:19:01.302 13:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.302 13:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:01.302 [2024-11-29 13:05:32.634354] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:01.302 null0 00:19:01.302 [2024-11-29 13:05:32.689184] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.303 [2024-11-29 13:05:32.713375] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:01.303 13:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.303 13:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:19:01.303 13:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:01.303 13:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:19:01.303 13:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:19:01.303 13:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:19:01.303 13:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80474 00:19:01.303 13:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:19:01.303 13:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80474 /var/tmp/bperf.sock 00:19:01.303 13:05:32 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80474 ']' 00:19:01.303 13:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:01.303 13:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:01.303 13:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:01.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:01.303 13:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:01.303 13:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:01.303 [2024-11-29 13:05:32.775448] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:19:01.303 [2024-11-29 13:05:32.775888] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80474 ] 00:19:01.574 [2024-11-29 13:05:32.922481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.574 [2024-11-29 13:05:32.997564] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.574 [2024-11-29 13:05:33.075202] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:01.866 13:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.866 13:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:19:01.866 13:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:01.866 13:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:02.124 13:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:02.124 13:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.124 13:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:02.124 13:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.124 13:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:02.125 13:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:02.383 nvme0n1 00:19:02.383 13:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:19:02.383 13:05:33 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.383 13:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:02.383 13:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.383 13:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:02.383 13:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:02.383 Running I/O for 2 seconds... 00:19:02.642 [2024-11-29 13:05:33.896655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:02.642 [2024-11-29 13:05:33.896713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.642 [2024-11-29 13:05:33.896729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:02.642 [2024-11-29 13:05:33.913962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:02.642 [2024-11-29 13:05:33.914008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.642 [2024-11-29 13:05:33.914036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:02.642 [2024-11-29 13:05:33.931825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:02.642 [2024-11-29 13:05:33.932105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.642 [2024-11-29 13:05:33.932140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:02.642 [2024-11-29 13:05:33.949763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:02.642 [2024-11-29 13:05:33.949801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.642 [2024-11-29 13:05:33.949829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:02.642 [2024-11-29 13:05:33.967594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:02.642 [2024-11-29 13:05:33.967631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.642 [2024-11-29 13:05:33.967660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:02.642 [2024-11-29 13:05:33.985170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:02.642 [2024-11-29 13:05:33.985207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13792 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.642 [2024-11-29 13:05:33.985219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:02.642 [2024-11-29 13:05:34.002221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:02.642 [2024-11-29 13:05:34.002258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.642 [2024-11-29 13:05:34.002287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:02.642 [2024-11-29 13:05:34.019491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:02.642 [2024-11-29 13:05:34.019527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.642 [2024-11-29 13:05:34.019554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:02.642 [2024-11-29 13:05:34.036788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:02.642 [2024-11-29 13:05:34.036842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.642 [2024-11-29 13:05:34.036870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:02.642 [2024-11-29 13:05:34.054165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:02.642 [2024-11-29 13:05:34.054203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.642 [2024-11-29 13:05:34.054214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:02.642 [2024-11-29 13:05:34.071752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:02.642 [2024-11-29 13:05:34.071789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.642 [2024-11-29 13:05:34.071816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:02.642 [2024-11-29 13:05:34.089360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:02.642 [2024-11-29 13:05:34.089407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.642 [2024-11-29 13:05:34.089420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:02.642 [2024-11-29 13:05:34.106600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:02.642 [2024-11-29 13:05:34.106636] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.642 [2024-11-29 13:05:34.106680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:02.642 [2024-11-29 13:05:34.123982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:02.642 [2024-11-29 13:05:34.124047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.642 [2024-11-29 13:05:34.124076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:02.642 [2024-11-29 13:05:34.140955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:02.642 [2024-11-29 13:05:34.141022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.642 [2024-11-29 13:05:34.141049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:02.902 [2024-11-29 13:05:34.158215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:02.902 [2024-11-29 13:05:34.158254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.902 [2024-11-29 13:05:34.158266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:02.902 [2024-11-29 13:05:34.175801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:02.902 [2024-11-29 13:05:34.175855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.902 [2024-11-29 13:05:34.175884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:02.902 [2024-11-29 13:05:34.193316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:02.902 [2024-11-29 13:05:34.193382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.902 [2024-11-29 13:05:34.193411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:02.902 [2024-11-29 13:05:34.211006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:02.902 [2024-11-29 13:05:34.211043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.902 [2024-11-29 13:05:34.211056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:02.902 [2024-11-29 13:05:34.228514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:02.902 [2024-11-29 13:05:34.228572] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.902 [2024-11-29 13:05:34.228600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:02.902 [2024-11-29 13:05:34.245959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:02.902 [2024-11-29 13:05:34.246023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.902 [2024-11-29 13:05:34.246052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:02.902 [2024-11-29 13:05:34.263360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:02.902 [2024-11-29 13:05:34.263432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.902 [2024-11-29 13:05:34.263479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:02.902 [2024-11-29 13:05:34.280869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:02.902 [2024-11-29 13:05:34.281171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.902 [2024-11-29 13:05:34.281189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:02.902 [2024-11-29 13:05:34.298532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:02.902 [2024-11-29 13:05:34.298570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.902 [2024-11-29 13:05:34.298597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:02.902 [2024-11-29 13:05:34.316018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:02.902 [2024-11-29 13:05:34.316108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.902 [2024-11-29 13:05:34.316122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:02.902 [2024-11-29 13:05:34.333595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:02.902 [2024-11-29 13:05:34.333633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.902 [2024-11-29 13:05:34.333662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:02.902 [2024-11-29 13:05:34.351115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1be6fb0) 00:19:02.902 [2024-11-29 13:05:34.351169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.902 [2024-11-29 13:05:34.351181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:02.902 [2024-11-29 13:05:34.368259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:02.902 [2024-11-29 13:05:34.368298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.902 [2024-11-29 13:05:34.368309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:02.902 [2024-11-29 13:05:34.385812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:02.902 [2024-11-29 13:05:34.386023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.902 [2024-11-29 13:05:34.386055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:02.902 [2024-11-29 13:05:34.403445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:02.902 [2024-11-29 13:05:34.403653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.902 [2024-11-29 13:05:34.403668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.161 [2024-11-29 13:05:34.421954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.161 [2024-11-29 13:05:34.422005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.161 [2024-11-29 13:05:34.422018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.161 [2024-11-29 13:05:34.440555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.161 [2024-11-29 13:05:34.440590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.161 [2024-11-29 13:05:34.440618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.161 [2024-11-29 13:05:34.458923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.161 [2024-11-29 13:05:34.459006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.161 [2024-11-29 13:05:34.459021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.161 [2024-11-29 13:05:34.476383] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.161 [2024-11-29 13:05:34.476424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.161 [2024-11-29 13:05:34.476438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.161 [2024-11-29 13:05:34.493924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.161 [2024-11-29 13:05:34.493971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.161 [2024-11-29 13:05:34.493984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.161 [2024-11-29 13:05:34.511320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.161 [2024-11-29 13:05:34.511356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.161 [2024-11-29 13:05:34.511368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.161 [2024-11-29 13:05:34.528980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.161 [2024-11-29 13:05:34.529015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.161 [2024-11-29 13:05:34.529043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.161 [2024-11-29 13:05:34.546338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.161 [2024-11-29 13:05:34.546575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.161 [2024-11-29 13:05:34.546592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.161 [2024-11-29 13:05:34.564274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.161 [2024-11-29 13:05:34.564478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.161 [2024-11-29 13:05:34.564509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.161 [2024-11-29 13:05:34.582082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.161 [2024-11-29 13:05:34.582136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.161 [2024-11-29 13:05:34.582149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:19:03.161 [2024-11-29 13:05:34.599813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.161 [2024-11-29 13:05:34.599851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.161 [2024-11-29 13:05:34.599878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.161 [2024-11-29 13:05:34.618088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.161 [2024-11-29 13:05:34.618200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.161 [2024-11-29 13:05:34.618222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.161 [2024-11-29 13:05:34.637936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.161 [2024-11-29 13:05:34.638028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.161 [2024-11-29 13:05:34.638060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.161 [2024-11-29 13:05:34.655907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.161 [2024-11-29 13:05:34.656156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.161 [2024-11-29 13:05:34.656189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.419 [2024-11-29 13:05:34.673601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.419 [2024-11-29 13:05:34.673642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.419 [2024-11-29 13:05:34.673672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.419 [2024-11-29 13:05:34.691391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.419 [2024-11-29 13:05:34.691611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.419 [2024-11-29 13:05:34.691643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.419 [2024-11-29 13:05:34.709213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.419 [2024-11-29 13:05:34.709423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.419 [2024-11-29 13:05:34.709443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.419 [2024-11-29 13:05:34.726966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.419 [2024-11-29 13:05:34.727048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.419 [2024-11-29 13:05:34.727078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.419 [2024-11-29 13:05:34.744553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.419 [2024-11-29 13:05:34.744589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.419 [2024-11-29 13:05:34.744617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.419 [2024-11-29 13:05:34.762157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.419 [2024-11-29 13:05:34.762196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.419 [2024-11-29 13:05:34.762208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.419 [2024-11-29 13:05:34.779566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.419 [2024-11-29 13:05:34.779603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.419 [2024-11-29 13:05:34.779630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.419 [2024-11-29 13:05:34.797089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.419 [2024-11-29 13:05:34.797328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.419 [2024-11-29 13:05:34.797347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.419 [2024-11-29 13:05:34.815771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.419 [2024-11-29 13:05:34.815819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.419 [2024-11-29 13:05:34.815849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.419 [2024-11-29 13:05:34.834779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.419 [2024-11-29 13:05:34.834825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.419 [2024-11-29 13:05:34.834839] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.419 [2024-11-29 13:05:34.852470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.419 [2024-11-29 13:05:34.852535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.420 [2024-11-29 13:05:34.852547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.420 [2024-11-29 13:05:34.871311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.420 [2024-11-29 13:05:34.871350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.420 [2024-11-29 13:05:34.871378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.420 14169.00 IOPS, 55.35 MiB/s [2024-11-29T13:05:34.935Z] [2024-11-29 13:05:34.888761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.420 [2024-11-29 13:05:34.888798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.420 [2024-11-29 13:05:34.888826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.420 [2024-11-29 13:05:34.906540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.420 [2024-11-29 13:05:34.906612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.420 [2024-11-29 13:05:34.906642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.420 [2024-11-29 13:05:34.923931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.420 [2024-11-29 13:05:34.924197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.420 [2024-11-29 13:05:34.924213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.679 [2024-11-29 13:05:34.942039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.679 [2024-11-29 13:05:34.942077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.679 [2024-11-29 13:05:34.942090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.679 [2024-11-29 13:05:34.960156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.679 [2024-11-29 13:05:34.960194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 
nsid:1 lba:15506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.679 [2024-11-29 13:05:34.960222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.679 [2024-11-29 13:05:34.977818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.679 [2024-11-29 13:05:34.977853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.679 [2024-11-29 13:05:34.977880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.679 [2024-11-29 13:05:34.995543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.679 [2024-11-29 13:05:34.995579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.679 [2024-11-29 13:05:34.995606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.679 [2024-11-29 13:05:35.020628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.679 [2024-11-29 13:05:35.020664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.679 [2024-11-29 13:05:35.020708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.679 [2024-11-29 13:05:35.038105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.679 [2024-11-29 13:05:35.038350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.679 [2024-11-29 13:05:35.038367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.679 [2024-11-29 13:05:35.055793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.679 [2024-11-29 13:05:35.055831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.679 [2024-11-29 13:05:35.055858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.679 [2024-11-29 13:05:35.073567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.679 [2024-11-29 13:05:35.073604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.679 [2024-11-29 13:05:35.073632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.679 [2024-11-29 13:05:35.091056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.679 [2024-11-29 13:05:35.091095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.679 [2024-11-29 13:05:35.091107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.679 [2024-11-29 13:05:35.108548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.679 [2024-11-29 13:05:35.108584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.679 [2024-11-29 13:05:35.108612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.679 [2024-11-29 13:05:35.126256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.679 [2024-11-29 13:05:35.126295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.679 [2024-11-29 13:05:35.126307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.679 [2024-11-29 13:05:35.144239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.679 [2024-11-29 13:05:35.144291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.679 [2024-11-29 13:05:35.144304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.679 [2024-11-29 13:05:35.161770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.679 [2024-11-29 13:05:35.161808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.679 [2024-11-29 13:05:35.161836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.679 [2024-11-29 13:05:35.179706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.679 [2024-11-29 13:05:35.179937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.679 [2024-11-29 13:05:35.179955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.937 [2024-11-29 13:05:35.197160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.938 [2024-11-29 13:05:35.197197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.938 [2024-11-29 13:05:35.197224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.938 [2024-11-29 13:05:35.214404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1be6fb0) 00:19:03.938 [2024-11-29 13:05:35.214456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.938 [2024-11-29 13:05:35.214484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.938 [2024-11-29 13:05:35.232054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.938 [2024-11-29 13:05:35.232219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.938 [2024-11-29 13:05:35.232236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.938 [2024-11-29 13:05:35.249853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.938 [2024-11-29 13:05:35.249904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.938 [2024-11-29 13:05:35.249933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.938 [2024-11-29 13:05:35.267396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.938 [2024-11-29 13:05:35.267467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.938 [2024-11-29 13:05:35.267496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.938 [2024-11-29 13:05:35.284852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.938 [2024-11-29 13:05:35.285061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.938 [2024-11-29 13:05:35.285093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.938 [2024-11-29 13:05:35.302671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.938 [2024-11-29 13:05:35.302710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.938 [2024-11-29 13:05:35.302738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.938 [2024-11-29 13:05:35.320012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.938 [2024-11-29 13:05:35.320081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.938 [2024-11-29 13:05:35.320109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.938 [2024-11-29 13:05:35.337230] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.938 [2024-11-29 13:05:35.337423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.938 [2024-11-29 13:05:35.337457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.938 [2024-11-29 13:05:35.354540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.938 [2024-11-29 13:05:35.354577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.938 [2024-11-29 13:05:35.354605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.938 [2024-11-29 13:05:35.372152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.938 [2024-11-29 13:05:35.372191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.938 [2024-11-29 13:05:35.372203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.938 [2024-11-29 13:05:35.389354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.938 [2024-11-29 13:05:35.389551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.938 [2024-11-29 13:05:35.389582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.938 [2024-11-29 13:05:35.406939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.938 [2024-11-29 13:05:35.407001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.938 [2024-11-29 13:05:35.407014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.938 [2024-11-29 13:05:35.424204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.938 [2024-11-29 13:05:35.424258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.938 [2024-11-29 13:05:35.424270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.938 [2024-11-29 13:05:35.441481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:03.938 [2024-11-29 13:05:35.441739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.938 [2024-11-29 13:05:35.441757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:19:04.197 [2024-11-29 13:05:35.460317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:04.197 [2024-11-29 13:05:35.460373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.197 [2024-11-29 13:05:35.460386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.197 [2024-11-29 13:05:35.478531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:04.197 [2024-11-29 13:05:35.478567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.197 [2024-11-29 13:05:35.478595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.197 [2024-11-29 13:05:35.496433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:04.197 [2024-11-29 13:05:35.496523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.197 [2024-11-29 13:05:35.496551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.197 [2024-11-29 13:05:35.514212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:04.197 [2024-11-29 13:05:35.514394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.197 [2024-11-29 13:05:35.514411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.197 [2024-11-29 13:05:35.531676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:04.197 [2024-11-29 13:05:35.531714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.197 [2024-11-29 13:05:35.531742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.197 [2024-11-29 13:05:35.548955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:04.197 [2024-11-29 13:05:35.548993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.197 [2024-11-29 13:05:35.549021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.197 [2024-11-29 13:05:35.566357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:04.197 [2024-11-29 13:05:35.566426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.197 [2024-11-29 13:05:35.566439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.197 [2024-11-29 13:05:35.583798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:04.197 [2024-11-29 13:05:35.583838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.197 [2024-11-29 13:05:35.583856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.197 [2024-11-29 13:05:35.601274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:04.197 [2024-11-29 13:05:35.601558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.197 [2024-11-29 13:05:35.601576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.197 [2024-11-29 13:05:35.618903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:04.197 [2024-11-29 13:05:35.618992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.197 [2024-11-29 13:05:35.619023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.197 [2024-11-29 13:05:35.636571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:04.197 [2024-11-29 13:05:35.636611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.197 [2024-11-29 13:05:35.636624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.197 [2024-11-29 13:05:35.654028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:04.197 [2024-11-29 13:05:35.654065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.198 [2024-11-29 13:05:35.654093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.198 [2024-11-29 13:05:35.671192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:04.198 [2024-11-29 13:05:35.671364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.198 [2024-11-29 13:05:35.671397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.198 [2024-11-29 13:05:35.688898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:04.198 [2024-11-29 13:05:35.689105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.198 [2024-11-29 13:05:35.689133] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.198 [2024-11-29 13:05:35.706266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:04.198 [2024-11-29 13:05:35.706305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:13096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.198 [2024-11-29 13:05:35.706316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.457 [2024-11-29 13:05:35.723823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:04.457 [2024-11-29 13:05:35.724044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.457 [2024-11-29 13:05:35.724071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.457 [2024-11-29 13:05:35.741345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:04.457 [2024-11-29 13:05:35.741400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.457 [2024-11-29 13:05:35.741412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.457 [2024-11-29 13:05:35.759243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:04.457 [2024-11-29 13:05:35.759468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.457 [2024-11-29 13:05:35.759517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.457 [2024-11-29 13:05:35.777047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:04.457 [2024-11-29 13:05:35.777084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.457 [2024-11-29 13:05:35.777112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.457 [2024-11-29 13:05:35.794361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:04.457 [2024-11-29 13:05:35.794401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:7769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.457 [2024-11-29 13:05:35.794414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.457 [2024-11-29 13:05:35.811739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:04.457 [2024-11-29 13:05:35.811777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:04.457 [2024-11-29 13:05:35.811804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.457 [2024-11-29 13:05:35.829999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:04.457 [2024-11-29 13:05:35.830087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.457 [2024-11-29 13:05:35.830127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.457 [2024-11-29 13:05:35.849866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:04.457 [2024-11-29 13:05:35.849942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.457 [2024-11-29 13:05:35.849958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.457 [2024-11-29 13:05:35.868443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be6fb0) 00:19:04.457 [2024-11-29 13:05:35.868507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.457 [2024-11-29 13:05:35.868550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.457 14295.00 IOPS, 55.84 MiB/s 00:19:04.457 Latency(us) 00:19:04.457 [2024-11-29T13:05:35.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:04.457 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:19:04.457 nvme0n1 : 2.01 14281.56 55.79 0.00 0.00 8954.92 8281.37 33840.41 00:19:04.457 [2024-11-29T13:05:35.972Z] =================================================================================================================== 00:19:04.457 [2024-11-29T13:05:35.972Z] Total : 14281.56 55.79 0.00 0.00 8954.92 8281.37 33840.41 00:19:04.457 { 00:19:04.457 "results": [ 00:19:04.457 { 00:19:04.457 "job": "nvme0n1", 00:19:04.457 "core_mask": "0x2", 00:19:04.457 "workload": "randread", 00:19:04.457 "status": "finished", 00:19:04.457 "queue_depth": 128, 00:19:04.457 "io_size": 4096, 00:19:04.457 "runtime": 2.010845, 00:19:04.457 "iops": 14281.558250387276, 00:19:04.457 "mibps": 55.787336915575295, 00:19:04.457 "io_failed": 0, 00:19:04.457 "io_timeout": 0, 00:19:04.457 "avg_latency_us": 8954.917310271037, 00:19:04.457 "min_latency_us": 8281.367272727273, 00:19:04.457 "max_latency_us": 33840.40727272727 00:19:04.457 } 00:19:04.457 ], 00:19:04.457 "core_count": 1 00:19:04.457 } 00:19:04.457 13:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:04.457 13:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:04.457 13:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:04.457 | .driver_specific 00:19:04.457 | .nvme_error 00:19:04.457 | .status_code 00:19:04.457 | .command_transient_transport_error' 00:19:04.457 13:05:35 
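
The pass/fail decision for this run is based on a counter read back from bdevperf itself. A condensed sketch of the get_transient_errcount step echoed here (the jq filter is copied verbatim from the log; the variable name is illustrative):

    # Ask the bdevperf instance for per-bdev I/O statistics and pull out how many
    # commands completed with a transient transport error, the status the injected
    # data-digest failures are reported with in the completions above.
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
                   bdev_get_iostat -b nvme0n1 |
               jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

    # The run only passes if at least one such error was actually observed
    # (112 in this run) before the bdevperf process is killed.
    (( errcount > 0 ))

The per-status error counters are available here because bdev_nvme_set_options was given --nvme-error-stat when the controller was configured earlier in this run.
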
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:04.716 13:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 112 > 0 )) 00:19:04.716 13:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80474 00:19:04.716 13:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80474 ']' 00:19:04.716 13:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80474 00:19:04.716 13:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:19:04.716 13:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:04.716 13:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80474 00:19:04.974 killing process with pid 80474 00:19:04.974 Received shutdown signal, test time was about 2.000000 seconds 00:19:04.974 00:19:04.974 Latency(us) 00:19:04.974 [2024-11-29T13:05:36.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:04.974 [2024-11-29T13:05:36.489Z] =================================================================================================================== 00:19:04.974 [2024-11-29T13:05:36.489Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:04.974 13:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:04.974 13:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:04.974 13:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80474' 00:19:04.974 13:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80474 00:19:04.974 13:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80474 00:19:05.234 13:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:19:05.234 13:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:05.234 13:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:19:05.234 13:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:19:05.234 13:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:19:05.234 13:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80528 00:19:05.234 13:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:19:05.234 13:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80528 /var/tmp/bperf.sock 00:19:05.234 13:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80528 ']' 00:19:05.234 13:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:05.234 13:05:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:05.234 13:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:05.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:05.234 13:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:05.234 13:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:05.234 [2024-11-29 13:05:36.590356] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:19:05.234 [2024-11-29 13:05:36.590731] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80528 ] 00:19:05.234 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:05.234 Zero copy mechanism will not be used. 00:19:05.234 [2024-11-29 13:05:36.732605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.492 [2024-11-29 13:05:36.808471] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.492 [2024-11-29 13:05:36.891363] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:05.492 13:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:05.492 13:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:19:05.492 13:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:05.492 13:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:06.059 13:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:06.059 13:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.059 13:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:06.059 13:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.059 13:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:06.059 13:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:06.319 nvme0n1 00:19:06.319 13:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:19:06.319 13:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.319 13:05:37 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:06.319 13:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.319 13:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:06.319 13:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:06.319 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:06.319 Zero copy mechanism will not be used. 00:19:06.319 Running I/O for 2 seconds... 00:19:06.319 [2024-11-29 13:05:37.780514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.319 [2024-11-29 13:05:37.780589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.319 [2024-11-29 13:05:37.780612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.319 [2024-11-29 13:05:37.786454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.319 [2024-11-29 13:05:37.786509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.319 [2024-11-29 13:05:37.786538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.319 [2024-11-29 13:05:37.791729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.319 [2024-11-29 13:05:37.791796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.319 [2024-11-29 13:05:37.791840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.319 [2024-11-29 13:05:37.797146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.319 [2024-11-29 13:05:37.797189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.319 [2024-11-29 13:05:37.797201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.319 [2024-11-29 13:05:37.802398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.319 [2024-11-29 13:05:37.802436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.319 [2024-11-29 13:05:37.802448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.319 [2024-11-29 13:05:37.807617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.319 [2024-11-29 13:05:37.807687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.319 [2024-11-29 13:05:37.807726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.319 [2024-11-29 13:05:37.813095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.319 [2024-11-29 13:05:37.813158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.319 [2024-11-29 13:05:37.813170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.319 [2024-11-29 13:05:37.818563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.319 [2024-11-29 13:05:37.818598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.319 [2024-11-29 13:05:37.818626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.319 [2024-11-29 13:05:37.823795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.319 [2024-11-29 13:05:37.823830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.319 [2024-11-29 13:05:37.823857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.319 [2024-11-29 13:05:37.829169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.319 [2024-11-29 13:05:37.829205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.319 [2024-11-29 13:05:37.829217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.579 [2024-11-29 13:05:37.834517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.579 [2024-11-29 13:05:37.834565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.579 [2024-11-29 13:05:37.834593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.579 [2024-11-29 13:05:37.839902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.579 [2024-11-29 13:05:37.840168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.579 [2024-11-29 13:05:37.840186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.579 [2024-11-29 13:05:37.845432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.579 [2024-11-29 13:05:37.845497] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.579 [2024-11-29 13:05:37.845509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.579 [2024-11-29 13:05:37.850579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.579 [2024-11-29 13:05:37.850615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.579 [2024-11-29 13:05:37.850643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.579 [2024-11-29 13:05:37.855821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.579 [2024-11-29 13:05:37.856044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.579 [2024-11-29 13:05:37.856077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.579 [2024-11-29 13:05:37.861320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.579 [2024-11-29 13:05:37.861356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.579 [2024-11-29 13:05:37.861369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.579 [2024-11-29 13:05:37.866592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.579 [2024-11-29 13:05:37.866627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.579 [2024-11-29 13:05:37.866639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.579 [2024-11-29 13:05:37.871857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.579 [2024-11-29 13:05:37.872145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.579 [2024-11-29 13:05:37.872162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.579 [2024-11-29 13:05:37.877553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.579 [2024-11-29 13:05:37.877604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.579 [2024-11-29 13:05:37.877617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.580 [2024-11-29 13:05:37.883105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 
00:19:06.580 [2024-11-29 13:05:37.883146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.580 [2024-11-29 13:05:37.883159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.580 [2024-11-29 13:05:37.888574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.580 [2024-11-29 13:05:37.888611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.580 [2024-11-29 13:05:37.888623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.580 [2024-11-29 13:05:37.894231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.580 [2024-11-29 13:05:37.894411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.580 [2024-11-29 13:05:37.894427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.580 [2024-11-29 13:05:37.899966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.580 [2024-11-29 13:05:37.900006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.580 [2024-11-29 13:05:37.900021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.580 [2024-11-29 13:05:37.905517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.580 [2024-11-29 13:05:37.905707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.580 [2024-11-29 13:05:37.905722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.580 [2024-11-29 13:05:37.911054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.580 [2024-11-29 13:05:37.911097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.580 [2024-11-29 13:05:37.911111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.580 [2024-11-29 13:05:37.916513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.580 [2024-11-29 13:05:37.916681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.580 [2024-11-29 13:05:37.916713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.580 [2024-11-29 13:05:37.922324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1cf59b0) 00:19:06.580 [2024-11-29 13:05:37.922363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.580 [2024-11-29 13:05:37.922393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.580 [2024-11-29 13:05:37.927711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.580 [2024-11-29 13:05:37.927930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.580 [2024-11-29 13:05:37.927948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.580 [2024-11-29 13:05:37.933057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.580 [2024-11-29 13:05:37.933093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.580 [2024-11-29 13:05:37.933154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.580 [2024-11-29 13:05:37.938203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.580 [2024-11-29 13:05:37.938240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.580 [2024-11-29 13:05:37.938283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.580 [2024-11-29 13:05:37.943398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.580 [2024-11-29 13:05:37.943633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.580 [2024-11-29 13:05:37.943649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.580 [2024-11-29 13:05:37.949032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.580 [2024-11-29 13:05:37.949068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.580 [2024-11-29 13:05:37.949096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.580 [2024-11-29 13:05:37.954403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.580 [2024-11-29 13:05:37.954590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.580 [2024-11-29 13:05:37.954621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.580 [2024-11-29 13:05:37.959894] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.580 [2024-11-29 13:05:37.960125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.580 [2024-11-29 13:05:37.960426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.580 [2024-11-29 13:05:37.965769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.580 [2024-11-29 13:05:37.966006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.580 [2024-11-29 13:05:37.966191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.580 [2024-11-29 13:05:37.971487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.580 [2024-11-29 13:05:37.971693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.580 [2024-11-29 13:05:37.971847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.580 [2024-11-29 13:05:37.977022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.580 [2024-11-29 13:05:37.977250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.580 [2024-11-29 13:05:37.977367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.580 [2024-11-29 13:05:37.982452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.580 [2024-11-29 13:05:37.982678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.580 [2024-11-29 13:05:37.983045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.580 [2024-11-29 13:05:37.988433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.580 [2024-11-29 13:05:37.988720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.580 [2024-11-29 13:05:37.988867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.580 [2024-11-29 13:05:37.994289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.580 [2024-11-29 13:05:37.994524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.580 [2024-11-29 13:05:37.994752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:19:06.580 [2024-11-29 13:05:38.000120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.580 [2024-11-29 13:05:38.000322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.580 [2024-11-29 13:05:38.000482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.580 [2024-11-29 13:05:38.005857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.580 [2024-11-29 13:05:38.006061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.580 [2024-11-29 13:05:38.006093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.580 [2024-11-29 13:05:38.011315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.580 [2024-11-29 13:05:38.011354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.580 [2024-11-29 13:05:38.011384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.580 [2024-11-29 13:05:38.016685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.580 [2024-11-29 13:05:38.016721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.580 [2024-11-29 13:05:38.016749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.580 [2024-11-29 13:05:38.021944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.580 [2024-11-29 13:05:38.022179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.580 [2024-11-29 13:05:38.022196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.580 [2024-11-29 13:05:38.027532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.580 [2024-11-29 13:05:38.027569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.580 [2024-11-29 13:05:38.027597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.581 [2024-11-29 13:05:38.032655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.581 [2024-11-29 13:05:38.032693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.581 [2024-11-29 13:05:38.032723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.581 [2024-11-29 13:05:38.037882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.581 [2024-11-29 13:05:38.038082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.581 [2024-11-29 13:05:38.038124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.581 [2024-11-29 13:05:38.043411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.581 [2024-11-29 13:05:38.043475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.581 [2024-11-29 13:05:38.043507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.581 [2024-11-29 13:05:38.048767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.581 [2024-11-29 13:05:38.048802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.581 [2024-11-29 13:05:38.048830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.581 [2024-11-29 13:05:38.054035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.581 [2024-11-29 13:05:38.054070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.581 [2024-11-29 13:05:38.054098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.581 [2024-11-29 13:05:38.059106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.581 [2024-11-29 13:05:38.059142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.581 [2024-11-29 13:05:38.059170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.581 [2024-11-29 13:05:38.064387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.581 [2024-11-29 13:05:38.064422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.581 [2024-11-29 13:05:38.064450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.581 [2024-11-29 13:05:38.069647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.581 [2024-11-29 13:05:38.069840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.581 [2024-11-29 13:05:38.069855] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.581 [2024-11-29 13:05:38.075117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.581 [2024-11-29 13:05:38.075157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.581 [2024-11-29 13:05:38.075185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.581 [2024-11-29 13:05:38.080374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.581 [2024-11-29 13:05:38.080609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.581 [2024-11-29 13:05:38.080644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.581 [2024-11-29 13:05:38.085847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.581 [2024-11-29 13:05:38.085928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.581 [2024-11-29 13:05:38.085942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.842 [2024-11-29 13:05:38.091097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.842 [2024-11-29 13:05:38.091135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.842 [2024-11-29 13:05:38.091164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.842 [2024-11-29 13:05:38.096433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.842 [2024-11-29 13:05:38.096469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.842 [2024-11-29 13:05:38.096527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.842 [2024-11-29 13:05:38.101634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.842 [2024-11-29 13:05:38.101718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.842 [2024-11-29 13:05:38.101746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.842 [2024-11-29 13:05:38.106914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.842 [2024-11-29 13:05:38.107148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:06.842 [2024-11-29 13:05:38.107165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.842 [2024-11-29 13:05:38.112302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.842 [2024-11-29 13:05:38.112343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.842 [2024-11-29 13:05:38.112373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.842 [2024-11-29 13:05:38.117460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.842 [2024-11-29 13:05:38.117522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.842 [2024-11-29 13:05:38.117557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.842 [2024-11-29 13:05:38.122769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.842 [2024-11-29 13:05:38.123000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.842 [2024-11-29 13:05:38.123033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.842 [2024-11-29 13:05:38.128096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.842 [2024-11-29 13:05:38.128152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.842 [2024-11-29 13:05:38.128181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.842 [2024-11-29 13:05:38.133394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.842 [2024-11-29 13:05:38.133434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.842 [2024-11-29 13:05:38.133479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.842 [2024-11-29 13:05:38.138711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.842 [2024-11-29 13:05:38.138885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.842 [2024-11-29 13:05:38.138942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.842 [2024-11-29 13:05:38.144111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.842 [2024-11-29 13:05:38.144174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.843 [2024-11-29 13:05:38.144202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.843 [2024-11-29 13:05:38.149180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.843 [2024-11-29 13:05:38.149229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.843 [2024-11-29 13:05:38.149258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.843 [2024-11-29 13:05:38.154542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.843 [2024-11-29 13:05:38.154715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.843 [2024-11-29 13:05:38.154730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.843 [2024-11-29 13:05:38.159843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.843 [2024-11-29 13:05:38.159927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.843 [2024-11-29 13:05:38.159941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.843 [2024-11-29 13:05:38.165133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.843 [2024-11-29 13:05:38.165179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.843 [2024-11-29 13:05:38.165206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.843 [2024-11-29 13:05:38.170291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.843 [2024-11-29 13:05:38.170327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.843 [2024-11-29 13:05:38.170355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.843 [2024-11-29 13:05:38.175831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.843 [2024-11-29 13:05:38.175866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.843 [2024-11-29 13:05:38.175919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.843 [2024-11-29 13:05:38.181268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.843 [2024-11-29 13:05:38.181309] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.843 [2024-11-29 13:05:38.181322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.843 [2024-11-29 13:05:38.186578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.843 [2024-11-29 13:05:38.186612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.843 [2024-11-29 13:05:38.186639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.843 [2024-11-29 13:05:38.191720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.843 [2024-11-29 13:05:38.191768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.843 [2024-11-29 13:05:38.191797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.843 [2024-11-29 13:05:38.197067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.843 [2024-11-29 13:05:38.197103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.843 [2024-11-29 13:05:38.197146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.843 [2024-11-29 13:05:38.202282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.843 [2024-11-29 13:05:38.202323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.843 [2024-11-29 13:05:38.202336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.843 [2024-11-29 13:05:38.207618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.843 [2024-11-29 13:05:38.207653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.843 [2024-11-29 13:05:38.207681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.843 [2024-11-29 13:05:38.212955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.843 [2024-11-29 13:05:38.213173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.843 [2024-11-29 13:05:38.213190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.843 [2024-11-29 13:05:38.218727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 
00:19:06.843 [2024-11-29 13:05:38.218767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.843 [2024-11-29 13:05:38.218797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.843 [2024-11-29 13:05:38.224060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.843 [2024-11-29 13:05:38.224101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.843 [2024-11-29 13:05:38.224125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.843 [2024-11-29 13:05:38.229599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.843 [2024-11-29 13:05:38.229635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.843 [2024-11-29 13:05:38.229663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.843 [2024-11-29 13:05:38.234962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.843 [2024-11-29 13:05:38.235049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.843 [2024-11-29 13:05:38.235082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.843 [2024-11-29 13:05:38.240368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.843 [2024-11-29 13:05:38.240406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.843 [2024-11-29 13:05:38.240435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.843 [2024-11-29 13:05:38.245847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.843 [2024-11-29 13:05:38.245912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.843 [2024-11-29 13:05:38.245941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.843 [2024-11-29 13:05:38.251192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.843 [2024-11-29 13:05:38.251232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.843 [2024-11-29 13:05:38.251261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.843 [2024-11-29 13:05:38.256385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1cf59b0) 00:19:06.843 [2024-11-29 13:05:38.256423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.843 [2024-11-29 13:05:38.256462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.843 [2024-11-29 13:05:38.261579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.843 [2024-11-29 13:05:38.261630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.843 [2024-11-29 13:05:38.261658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.843 [2024-11-29 13:05:38.267032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.843 [2024-11-29 13:05:38.267070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.843 [2024-11-29 13:05:38.267098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.843 [2024-11-29 13:05:38.272592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.843 [2024-11-29 13:05:38.272628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.843 [2024-11-29 13:05:38.272656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.843 [2024-11-29 13:05:38.277957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.843 [2024-11-29 13:05:38.278007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.843 [2024-11-29 13:05:38.278035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.843 [2024-11-29 13:05:38.283198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.843 [2024-11-29 13:05:38.283236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.843 [2024-11-29 13:05:38.283266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.843 [2024-11-29 13:05:38.288447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.844 [2024-11-29 13:05:38.288511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.844 [2024-11-29 13:05:38.288539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.844 [2024-11-29 13:05:38.293908] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.844 [2024-11-29 13:05:38.294205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.844 [2024-11-29 13:05:38.294221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.844 [2024-11-29 13:05:38.299345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.844 [2024-11-29 13:05:38.299383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.844 [2024-11-29 13:05:38.299411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.844 [2024-11-29 13:05:38.304668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.844 [2024-11-29 13:05:38.304770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.844 [2024-11-29 13:05:38.304800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.844 [2024-11-29 13:05:38.310153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.844 [2024-11-29 13:05:38.310189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.844 [2024-11-29 13:05:38.310218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.844 [2024-11-29 13:05:38.315514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.844 [2024-11-29 13:05:38.315553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.844 [2024-11-29 13:05:38.315582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.844 [2024-11-29 13:05:38.320854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.844 [2024-11-29 13:05:38.320919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.844 [2024-11-29 13:05:38.320949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.844 [2024-11-29 13:05:38.326326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.844 [2024-11-29 13:05:38.326366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.844 [2024-11-29 13:05:38.326395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:19:06.844 [2024-11-29 13:05:38.331469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.844 [2024-11-29 13:05:38.331511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.844 [2024-11-29 13:05:38.331539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.844 [2024-11-29 13:05:38.336730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.844 [2024-11-29 13:05:38.336765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.844 [2024-11-29 13:05:38.336793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.844 [2024-11-29 13:05:38.341994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.844 [2024-11-29 13:05:38.342047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.844 [2024-11-29 13:05:38.342075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.844 [2024-11-29 13:05:38.347329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.844 [2024-11-29 13:05:38.347366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.844 [2024-11-29 13:05:38.347395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.844 [2024-11-29 13:05:38.352540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:06.844 [2024-11-29 13:05:38.352576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.844 [2024-11-29 13:05:38.352603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.104 [2024-11-29 13:05:38.357715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.104 [2024-11-29 13:05:38.357908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.104 [2024-11-29 13:05:38.357927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.104 [2024-11-29 13:05:38.363053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.104 [2024-11-29 13:05:38.363091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.104 [2024-11-29 13:05:38.363120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.104 [2024-11-29 13:05:38.368316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.104 [2024-11-29 13:05:38.368352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.104 [2024-11-29 13:05:38.368381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.104 [2024-11-29 13:05:38.373487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.104 [2024-11-29 13:05:38.373693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.104 [2024-11-29 13:05:38.373709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.104 [2024-11-29 13:05:38.378757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.104 [2024-11-29 13:05:38.378798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.104 [2024-11-29 13:05:38.378826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.104 [2024-11-29 13:05:38.384298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.104 [2024-11-29 13:05:38.384488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.104 [2024-11-29 13:05:38.384523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.104 [2024-11-29 13:05:38.389962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.104 [2024-11-29 13:05:38.390030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.104 [2024-11-29 13:05:38.390059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.104 [2024-11-29 13:05:38.395332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.104 [2024-11-29 13:05:38.395573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.104 [2024-11-29 13:05:38.395590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.104 [2024-11-29 13:05:38.400819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.104 [2024-11-29 13:05:38.400856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.104 [2024-11-29 13:05:38.400884] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.104 [2024-11-29 13:05:38.405969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.104 [2024-11-29 13:05:38.406070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.104 [2024-11-29 13:05:38.406085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.104 [2024-11-29 13:05:38.411463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.104 [2024-11-29 13:05:38.411509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.104 [2024-11-29 13:05:38.411537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.104 [2024-11-29 13:05:38.416829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.104 [2024-11-29 13:05:38.416864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.104 [2024-11-29 13:05:38.416913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.104 [2024-11-29 13:05:38.421980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.104 [2024-11-29 13:05:38.422039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.104 [2024-11-29 13:05:38.422052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.104 [2024-11-29 13:05:38.427168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.104 [2024-11-29 13:05:38.427204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.104 [2024-11-29 13:05:38.427232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.104 [2024-11-29 13:05:38.432351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.104 [2024-11-29 13:05:38.432386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.104 [2024-11-29 13:05:38.432414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.104 [2024-11-29 13:05:38.437517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.104 [2024-11-29 13:05:38.437695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.104 
[2024-11-29 13:05:38.437726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.104 [2024-11-29 13:05:38.443108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.104 [2024-11-29 13:05:38.443162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.104 [2024-11-29 13:05:38.443191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.105 [2024-11-29 13:05:38.448308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.105 [2024-11-29 13:05:38.448344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.105 [2024-11-29 13:05:38.448372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.105 [2024-11-29 13:05:38.453722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.105 [2024-11-29 13:05:38.453936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.105 [2024-11-29 13:05:38.453954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.105 [2024-11-29 13:05:38.459181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.105 [2024-11-29 13:05:38.459220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.105 [2024-11-29 13:05:38.459233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.105 [2024-11-29 13:05:38.464259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.105 [2024-11-29 13:05:38.464305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.105 [2024-11-29 13:05:38.464334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.105 [2024-11-29 13:05:38.469612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.105 [2024-11-29 13:05:38.469647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.105 [2024-11-29 13:05:38.469675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.105 [2024-11-29 13:05:38.474779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.105 [2024-11-29 13:05:38.474816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2240 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.105 [2024-11-29 13:05:38.474845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.105 [2024-11-29 13:05:38.480107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.105 [2024-11-29 13:05:38.480160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.105 [2024-11-29 13:05:38.480188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.105 [2024-11-29 13:05:38.485297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.105 [2024-11-29 13:05:38.485400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.105 [2024-11-29 13:05:38.485429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.105 [2024-11-29 13:05:38.490674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.105 [2024-11-29 13:05:38.490858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.105 [2024-11-29 13:05:38.490874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.105 [2024-11-29 13:05:38.496005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.105 [2024-11-29 13:05:38.496042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.105 [2024-11-29 13:05:38.496070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.105 [2024-11-29 13:05:38.501184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.105 [2024-11-29 13:05:38.501219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.105 [2024-11-29 13:05:38.501246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.105 [2024-11-29 13:05:38.506188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.105 [2024-11-29 13:05:38.506223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.105 [2024-11-29 13:05:38.506251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.105 [2024-11-29 13:05:38.511432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.105 [2024-11-29 13:05:38.511485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.105 [2024-11-29 13:05:38.511512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.105 [2024-11-29 13:05:38.516780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.105 [2024-11-29 13:05:38.516968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.105 [2024-11-29 13:05:38.516985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.105 [2024-11-29 13:05:38.522495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.105 [2024-11-29 13:05:38.522547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.105 [2024-11-29 13:05:38.522576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.105 [2024-11-29 13:05:38.527869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.105 [2024-11-29 13:05:38.527958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.105 [2024-11-29 13:05:38.527988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.105 [2024-11-29 13:05:38.533348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.105 [2024-11-29 13:05:38.533386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.105 [2024-11-29 13:05:38.533420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.105 [2024-11-29 13:05:38.539048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.105 [2024-11-29 13:05:38.539087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.105 [2024-11-29 13:05:38.539100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.105 [2024-11-29 13:05:38.544708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.105 [2024-11-29 13:05:38.544742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.105 [2024-11-29 13:05:38.544770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.105 [2024-11-29 13:05:38.550370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.105 [2024-11-29 13:05:38.550405] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.105 [2024-11-29 13:05:38.550443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.105 [2024-11-29 13:05:38.555928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.105 [2024-11-29 13:05:38.556151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.105 [2024-11-29 13:05:38.556183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.105 [2024-11-29 13:05:38.561458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.105 [2024-11-29 13:05:38.561534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.105 [2024-11-29 13:05:38.561563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.105 [2024-11-29 13:05:38.566723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.105 [2024-11-29 13:05:38.566765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.105 [2024-11-29 13:05:38.566793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.105 [2024-11-29 13:05:38.572131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.105 [2024-11-29 13:05:38.572168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.105 [2024-11-29 13:05:38.572197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.105 [2024-11-29 13:05:38.577531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.105 [2024-11-29 13:05:38.577587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.105 [2024-11-29 13:05:38.577616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.105 [2024-11-29 13:05:38.582999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.105 [2024-11-29 13:05:38.583068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.105 [2024-11-29 13:05:38.583082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.105 [2024-11-29 13:05:38.588484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 
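[editor's aside] The repeated "*ERROR*: data digest error on tqpair=..." lines above come from the host-side CRC32C check on received NVMe/TCP data PDUs failing, which in this run appears to be deliberate error injection by the test (hence the matching TRANSIENT TRANSPORT ERROR completions). Below is a minimal, self-contained sketch of what such a data-digest check does in principle: a CRC32C (Castagnoli) over the PDU payload compared with the digest received alongside it. It is illustrative only and is not SPDK's actual implementation; all names in it are hypothetical.

```c
/*
 * Illustrative sketch only (not SPDK code): how an NVMe/TCP-style data
 * digest check works in principle. The receiver computes CRC32C
 * (Castagnoli polynomial, reflected) over the PDU payload and compares
 * it with the digest carried with the PDU; a mismatch is what the log
 * above reports as a "data digest error".
 */
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>
#include <stdbool.h>

/* Bitwise (slow but dependency-free) CRC32C, reflected, init/xorout 0xFFFFFFFF. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Returns true when the payload matches the digest received with the PDU. */
static bool data_digest_ok(const uint8_t *payload, size_t len, uint32_t recv_digest)
{
    return crc32c(payload, len) == recv_digest;
}

int main(void)
{
    uint8_t payload[] = "example PDU payload";
    uint32_t good = crc32c(payload, sizeof(payload) - 1);

    printf("digest check (intact):    %s\n",
           data_digest_ok(payload, sizeof(payload) - 1, good) ? "ok" : "data digest error");

    payload[0] ^= 0x01;  /* simulate corruption / an injected digest error */
    printf("digest check (corrupted): %s\n",
           data_digest_ok(payload, sizeof(payload) - 1, good) ? "ok" : "data digest error");
    return 0;
}
```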
00:19:07.105 [2024-11-29 13:05:38.588531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.106 [2024-11-29 13:05:38.588558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.106 [2024-11-29 13:05:38.594021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.106 [2024-11-29 13:05:38.594057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.106 [2024-11-29 13:05:38.594085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.106 [2024-11-29 13:05:38.599530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.106 [2024-11-29 13:05:38.599716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.106 [2024-11-29 13:05:38.599732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.106 [2024-11-29 13:05:38.605293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.106 [2024-11-29 13:05:38.605344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.106 [2024-11-29 13:05:38.605380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.106 [2024-11-29 13:05:38.610742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.106 [2024-11-29 13:05:38.610933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.106 [2024-11-29 13:05:38.610965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.366 [2024-11-29 13:05:38.616559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.366 [2024-11-29 13:05:38.616597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.366 [2024-11-29 13:05:38.616641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.366 [2024-11-29 13:05:38.621960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.366 [2024-11-29 13:05:38.622024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.366 [2024-11-29 13:05:38.622044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.366 [2024-11-29 13:05:38.627337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.366 [2024-11-29 13:05:38.627373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.366 [2024-11-29 13:05:38.627401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.366 [2024-11-29 13:05:38.632557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.366 [2024-11-29 13:05:38.632629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.366 [2024-11-29 13:05:38.632641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.366 [2024-11-29 13:05:38.638001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.366 [2024-11-29 13:05:38.638032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.366 [2024-11-29 13:05:38.638059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.366 [2024-11-29 13:05:38.643143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.366 [2024-11-29 13:05:38.643183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.366 [2024-11-29 13:05:38.643196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.366 [2024-11-29 13:05:38.648301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.366 [2024-11-29 13:05:38.648336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.366 [2024-11-29 13:05:38.648364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.366 [2024-11-29 13:05:38.653494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.366 [2024-11-29 13:05:38.653697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.366 [2024-11-29 13:05:38.653728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.366 [2024-11-29 13:05:38.658961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.366 [2024-11-29 13:05:38.659032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.366 [2024-11-29 13:05:38.659062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.366 [2024-11-29 13:05:38.664328] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.366 [2024-11-29 13:05:38.664540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.366 [2024-11-29 13:05:38.664572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.366 [2024-11-29 13:05:38.669852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.366 [2024-11-29 13:05:38.669924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.366 [2024-11-29 13:05:38.669953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.366 [2024-11-29 13:05:38.675155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.366 [2024-11-29 13:05:38.675195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.366 [2024-11-29 13:05:38.675208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.366 [2024-11-29 13:05:38.680268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.366 [2024-11-29 13:05:38.680303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.366 [2024-11-29 13:05:38.680331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.366 [2024-11-29 13:05:38.685534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.366 [2024-11-29 13:05:38.685602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.367 [2024-11-29 13:05:38.685631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.367 [2024-11-29 13:05:38.690931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.367 [2024-11-29 13:05:38.691179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.367 [2024-11-29 13:05:38.691211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.367 [2024-11-29 13:05:38.696553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.367 [2024-11-29 13:05:38.696591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.367 [2024-11-29 13:05:38.696620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:19:07.367 [2024-11-29 13:05:38.701864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.367 [2024-11-29 13:05:38.701943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.367 [2024-11-29 13:05:38.701955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.367 [2024-11-29 13:05:38.707320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.367 [2024-11-29 13:05:38.707358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.367 [2024-11-29 13:05:38.707387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.367 [2024-11-29 13:05:38.712659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.367 [2024-11-29 13:05:38.712693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.367 [2024-11-29 13:05:38.712720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.367 [2024-11-29 13:05:38.717991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.367 [2024-11-29 13:05:38.718027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.367 [2024-11-29 13:05:38.718056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.367 [2024-11-29 13:05:38.723280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.367 [2024-11-29 13:05:38.723362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.367 [2024-11-29 13:05:38.723390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.367 [2024-11-29 13:05:38.728657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.367 [2024-11-29 13:05:38.728707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.367 [2024-11-29 13:05:38.728742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.367 [2024-11-29 13:05:38.733834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.367 [2024-11-29 13:05:38.734043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.367 [2024-11-29 13:05:38.734075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.367 [2024-11-29 13:05:38.739230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.367 [2024-11-29 13:05:38.739334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.367 [2024-11-29 13:05:38.739363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.367 [2024-11-29 13:05:38.744497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.367 [2024-11-29 13:05:38.744538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.367 [2024-11-29 13:05:38.744576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.367 [2024-11-29 13:05:38.749722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.367 [2024-11-29 13:05:38.749934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.367 [2024-11-29 13:05:38.749969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.367 [2024-11-29 13:05:38.755177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.367 [2024-11-29 13:05:38.755232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.367 [2024-11-29 13:05:38.755278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.367 [2024-11-29 13:05:38.760614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.367 [2024-11-29 13:05:38.760649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.367 [2024-11-29 13:05:38.760677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.367 [2024-11-29 13:05:38.765915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.367 [2024-11-29 13:05:38.766163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.367 [2024-11-29 13:05:38.766196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.367 [2024-11-29 13:05:38.771591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.367 [2024-11-29 13:05:38.771636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.367 [2024-11-29 13:05:38.771680] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.367 5735.00 IOPS, 716.88 MiB/s [2024-11-29T13:05:38.882Z] [2024-11-29 13:05:38.778774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.367 [2024-11-29 13:05:38.778811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.367 [2024-11-29 13:05:38.778838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.367 [2024-11-29 13:05:38.783987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.367 [2024-11-29 13:05:38.784025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.367 [2024-11-29 13:05:38.784054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.367 [2024-11-29 13:05:38.789361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.367 [2024-11-29 13:05:38.789565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.367 [2024-11-29 13:05:38.789582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.367 [2024-11-29 13:05:38.794809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.367 [2024-11-29 13:05:38.794845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.367 [2024-11-29 13:05:38.794874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.367 [2024-11-29 13:05:38.800029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.367 [2024-11-29 13:05:38.800061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.367 [2024-11-29 13:05:38.800088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.367 [2024-11-29 13:05:38.805221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.367 [2024-11-29 13:05:38.805257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.367 [2024-11-29 13:05:38.805286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.367 [2024-11-29 13:05:38.810562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.367 [2024-11-29 13:05:38.810596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.367 [2024-11-29 13:05:38.810624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.367 [2024-11-29 13:05:38.815994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.367 [2024-11-29 13:05:38.816073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.367 [2024-11-29 13:05:38.816102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.367 [2024-11-29 13:05:38.821240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.367 [2024-11-29 13:05:38.821277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.367 [2024-11-29 13:05:38.821305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.367 [2024-11-29 13:05:38.826588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.367 [2024-11-29 13:05:38.826625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.367 [2024-11-29 13:05:38.826653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.367 [2024-11-29 13:05:38.831951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.367 [2024-11-29 13:05:38.832214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.368 [2024-11-29 13:05:38.832247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.368 [2024-11-29 13:05:38.837613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.368 [2024-11-29 13:05:38.837650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.368 [2024-11-29 13:05:38.837677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.368 [2024-11-29 13:05:38.842862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.368 [2024-11-29 13:05:38.842949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.368 [2024-11-29 13:05:38.842964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.368 [2024-11-29 13:05:38.848047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.368 [2024-11-29 13:05:38.848080] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.368 [2024-11-29 13:05:38.848108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.368 [2024-11-29 13:05:38.853353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.368 [2024-11-29 13:05:38.853392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.368 [2024-11-29 13:05:38.853421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.368 [2024-11-29 13:05:38.858565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.368 [2024-11-29 13:05:38.858600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.368 [2024-11-29 13:05:38.858627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.368 [2024-11-29 13:05:38.863821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.368 [2024-11-29 13:05:38.864038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.368 [2024-11-29 13:05:38.864070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.368 [2024-11-29 13:05:38.869277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.368 [2024-11-29 13:05:38.869314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.368 [2024-11-29 13:05:38.869343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.368 [2024-11-29 13:05:38.874329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.368 [2024-11-29 13:05:38.874371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.368 [2024-11-29 13:05:38.874400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.654 [2024-11-29 13:05:38.879664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.654 [2024-11-29 13:05:38.879827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.654 [2024-11-29 13:05:38.879858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.654 [2024-11-29 13:05:38.885177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 
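[editor's aside] The NOTICE completion lines print pieces of the 16-bit NVMe completion Status Field: "(00/22)" is status code type 0x0 (generic) with status code 0x22 (Transient Transport Error), and p/m/dnr are the phase, more, and do-not-retry bits. The sketch below is a minimal, hypothetical decoder of that field based on the NVMe completion layout, added only to make the printed values easier to read; it is not SPDK's spdk_nvme_print_completion routine.

```c
/*
 * Hypothetical helper (not SPDK code): decode the 16-bit NVMe completion
 * Status Field into the pieces shown in the log, e.g.
 * "(00/22) ... p:0 m:0 dnr:0" = SCT 0x0 / SC 0x22 (Transient Transport
 * Error), phase 0, more 0, do-not-retry 0.
 */
#include <stdint.h>
#include <stdio.h>

struct nvme_status {
    uint8_t p;    /* phase tag            (bit 0)      */
    uint8_t sc;   /* status code          (bits 8:1)   */
    uint8_t sct;  /* status code type     (bits 11:9)  */
    uint8_t crd;  /* command retry delay  (bits 13:12) */
    uint8_t m;    /* more                 (bit 14)     */
    uint8_t dnr;  /* do not retry         (bit 15)     */
};

static struct nvme_status decode_status(uint16_t sf)
{
    struct nvme_status s = {
        .p   = sf & 0x1,
        .sc  = (sf >> 1) & 0xFF,
        .sct = (sf >> 9) & 0x7,
        .crd = (sf >> 12) & 0x3,
        .m   = (sf >> 14) & 0x1,
        .dnr = (sf >> 15) & 0x1,
    };
    return s;
}

int main(void)
{
    /* SCT 0x0, SC 0x22 (Transient Transport Error), all flag bits clear. */
    uint16_t sf = (uint16_t)(0x22 << 1);
    struct nvme_status s = decode_status(sf);

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
    return 0;
}
```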
00:19:07.654 [2024-11-29 13:05:38.885213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.654 [2024-11-29 13:05:38.885241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.654 [2024-11-29 13:05:38.890248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.654 [2024-11-29 13:05:38.890419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.654 [2024-11-29 13:05:38.890436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.654 [2024-11-29 13:05:38.895702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.654 [2024-11-29 13:05:38.895739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.654 [2024-11-29 13:05:38.895766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.654 [2024-11-29 13:05:38.900963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.654 [2024-11-29 13:05:38.901064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.654 [2024-11-29 13:05:38.901107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.654 [2024-11-29 13:05:38.906789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.654 [2024-11-29 13:05:38.906842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.654 [2024-11-29 13:05:38.906870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.654 [2024-11-29 13:05:38.912328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.654 [2024-11-29 13:05:38.912403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.654 [2024-11-29 13:05:38.912417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.654 [2024-11-29 13:05:38.917765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.654 [2024-11-29 13:05:38.917995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.654 [2024-11-29 13:05:38.918013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.654 [2024-11-29 13:05:38.923518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.654 [2024-11-29 13:05:38.923570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.654 [2024-11-29 13:05:38.923597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.654 [2024-11-29 13:05:38.929135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.654 [2024-11-29 13:05:38.929174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.654 [2024-11-29 13:05:38.929218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.654 [2024-11-29 13:05:38.934791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.655 [2024-11-29 13:05:38.934825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.655 [2024-11-29 13:05:38.934869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.655 [2024-11-29 13:05:38.940575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.655 [2024-11-29 13:05:38.940736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.655 [2024-11-29 13:05:38.940767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.655 [2024-11-29 13:05:38.946288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.655 [2024-11-29 13:05:38.946324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.655 [2024-11-29 13:05:38.946352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.655 [2024-11-29 13:05:38.951745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.655 [2024-11-29 13:05:38.951939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.655 [2024-11-29 13:05:38.951958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.655 [2024-11-29 13:05:38.957175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.655 [2024-11-29 13:05:38.957212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.655 [2024-11-29 13:05:38.957239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.655 [2024-11-29 13:05:38.962389] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.655 [2024-11-29 13:05:38.962429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.655 [2024-11-29 13:05:38.962442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.655 [2024-11-29 13:05:38.967881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.655 [2024-11-29 13:05:38.968109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.655 [2024-11-29 13:05:38.968150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.655 [2024-11-29 13:05:38.973348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.655 [2024-11-29 13:05:38.973386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.655 [2024-11-29 13:05:38.973414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.655 [2024-11-29 13:05:38.978741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.655 [2024-11-29 13:05:38.978774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.655 [2024-11-29 13:05:38.978801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.655 [2024-11-29 13:05:38.984048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.655 [2024-11-29 13:05:38.984081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.655 [2024-11-29 13:05:38.984108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.655 [2024-11-29 13:05:38.989171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.655 [2024-11-29 13:05:38.989206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.655 [2024-11-29 13:05:38.989235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.655 [2024-11-29 13:05:38.994284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.655 [2024-11-29 13:05:38.994319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.655 [2024-11-29 13:05:38.994347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:19:07.655 [2024-11-29 13:05:38.999505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.655 [2024-11-29 13:05:38.999676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.655 [2024-11-29 13:05:38.999707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.655 [2024-11-29 13:05:39.004998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.655 [2024-11-29 13:05:39.005032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.655 [2024-11-29 13:05:39.005060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.655 [2024-11-29 13:05:39.010380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.655 [2024-11-29 13:05:39.010578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.655 [2024-11-29 13:05:39.010593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.655 [2024-11-29 13:05:39.015791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.655 [2024-11-29 13:05:39.015846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.655 [2024-11-29 13:05:39.015875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.655 [2024-11-29 13:05:39.021323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.655 [2024-11-29 13:05:39.021359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.655 [2024-11-29 13:05:39.021386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.655 [2024-11-29 13:05:39.026374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.655 [2024-11-29 13:05:39.026413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.655 [2024-11-29 13:05:39.026442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.655 [2024-11-29 13:05:39.031564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.655 [2024-11-29 13:05:39.031598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.655 [2024-11-29 13:05:39.031625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.655 [2024-11-29 13:05:39.037002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.655 [2024-11-29 13:05:39.037066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.655 [2024-11-29 13:05:39.037096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.655 [2024-11-29 13:05:39.042295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.655 [2024-11-29 13:05:39.042331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.655 [2024-11-29 13:05:39.042359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.655 [2024-11-29 13:05:39.047590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.655 [2024-11-29 13:05:39.047654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.655 [2024-11-29 13:05:39.047682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.655 [2024-11-29 13:05:39.052968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.655 [2024-11-29 13:05:39.053211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.655 [2024-11-29 13:05:39.053227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.655 [2024-11-29 13:05:39.058370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.655 [2024-11-29 13:05:39.058408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.655 [2024-11-29 13:05:39.058436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.655 [2024-11-29 13:05:39.063617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.655 [2024-11-29 13:05:39.063652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.655 [2024-11-29 13:05:39.063680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.655 [2024-11-29 13:05:39.069211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.655 [2024-11-29 13:05:39.069263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.655 [2024-11-29 13:05:39.069291] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.655 [2024-11-29 13:05:39.074388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.655 [2024-11-29 13:05:39.074440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.656 [2024-11-29 13:05:39.074468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.656 [2024-11-29 13:05:39.079637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.656 [2024-11-29 13:05:39.079669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.656 [2024-11-29 13:05:39.079698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.656 [2024-11-29 13:05:39.085006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.656 [2024-11-29 13:05:39.085057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.656 [2024-11-29 13:05:39.085084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.656 [2024-11-29 13:05:39.090213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.656 [2024-11-29 13:05:39.090250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.656 [2024-11-29 13:05:39.090278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.656 [2024-11-29 13:05:39.095446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.656 [2024-11-29 13:05:39.095485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.656 [2024-11-29 13:05:39.095512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.656 [2024-11-29 13:05:39.100553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.656 [2024-11-29 13:05:39.100733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.656 [2024-11-29 13:05:39.100748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.656 [2024-11-29 13:05:39.105989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.656 [2024-11-29 13:05:39.106022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.656 
[2024-11-29 13:05:39.106050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.656 [2024-11-29 13:05:39.110908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.656 [2024-11-29 13:05:39.110965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.656 [2024-11-29 13:05:39.111000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.656 [2024-11-29 13:05:39.116061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.656 [2024-11-29 13:05:39.116100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.656 [2024-11-29 13:05:39.116128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.656 [2024-11-29 13:05:39.121291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.656 [2024-11-29 13:05:39.121325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.656 [2024-11-29 13:05:39.121367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.656 [2024-11-29 13:05:39.126543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.656 [2024-11-29 13:05:39.126583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.656 [2024-11-29 13:05:39.126610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.656 [2024-11-29 13:05:39.131673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.656 [2024-11-29 13:05:39.131953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.656 [2024-11-29 13:05:39.131972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.656 [2024-11-29 13:05:39.137004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.656 [2024-11-29 13:05:39.137067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.656 [2024-11-29 13:05:39.137095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.916 [2024-11-29 13:05:39.142355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.916 [2024-11-29 13:05:39.142395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12800 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.916 [2024-11-29 13:05:39.142409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.916 [2024-11-29 13:05:39.147638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.916 [2024-11-29 13:05:39.147791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.916 [2024-11-29 13:05:39.147821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.916 [2024-11-29 13:05:39.153069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.916 [2024-11-29 13:05:39.153103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.916 [2024-11-29 13:05:39.153148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.916 [2024-11-29 13:05:39.158396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.916 [2024-11-29 13:05:39.158432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.916 [2024-11-29 13:05:39.158443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.916 [2024-11-29 13:05:39.163682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.916 [2024-11-29 13:05:39.163836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.916 [2024-11-29 13:05:39.163867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.916 [2024-11-29 13:05:39.169224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.916 [2024-11-29 13:05:39.169275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.916 [2024-11-29 13:05:39.169304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.916 [2024-11-29 13:05:39.174671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.916 [2024-11-29 13:05:39.174715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.916 [2024-11-29 13:05:39.174742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.916 [2024-11-29 13:05:39.180023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.916 [2024-11-29 13:05:39.180058] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.916 [2024-11-29 13:05:39.180086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.916 [2024-11-29 13:05:39.185367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.916 [2024-11-29 13:05:39.185436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.916 [2024-11-29 13:05:39.185483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.916 [2024-11-29 13:05:39.190715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.916 [2024-11-29 13:05:39.190751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.916 [2024-11-29 13:05:39.190778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.916 [2024-11-29 13:05:39.195958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.916 [2024-11-29 13:05:39.196005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.916 [2024-11-29 13:05:39.196034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.916 [2024-11-29 13:05:39.201332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.916 [2024-11-29 13:05:39.201367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.916 [2024-11-29 13:05:39.201395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.916 [2024-11-29 13:05:39.206636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.916 [2024-11-29 13:05:39.206690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.916 [2024-11-29 13:05:39.206718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.916 [2024-11-29 13:05:39.211973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.916 [2024-11-29 13:05:39.212210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.916 [2024-11-29 13:05:39.212226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.916 [2024-11-29 13:05:39.217551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.916 [2024-11-29 13:05:39.217590] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.916 [2024-11-29 13:05:39.217619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.917 [2024-11-29 13:05:39.222806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.917 [2024-11-29 13:05:39.222840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.917 [2024-11-29 13:05:39.222868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.917 [2024-11-29 13:05:39.227839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.917 [2024-11-29 13:05:39.228088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.917 [2024-11-29 13:05:39.228148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.917 [2024-11-29 13:05:39.233259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.917 [2024-11-29 13:05:39.233295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.917 [2024-11-29 13:05:39.233323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.917 [2024-11-29 13:05:39.238446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.917 [2024-11-29 13:05:39.238501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.917 [2024-11-29 13:05:39.238537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.917 [2024-11-29 13:05:39.243646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.917 [2024-11-29 13:05:39.243815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.917 [2024-11-29 13:05:39.243848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.917 [2024-11-29 13:05:39.249247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.917 [2024-11-29 13:05:39.249284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.917 [2024-11-29 13:05:39.249311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.917 [2024-11-29 13:05:39.254291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1cf59b0) 00:19:07.917 [2024-11-29 13:05:39.254326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.917 [2024-11-29 13:05:39.254354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.917 [2024-11-29 13:05:39.259548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.917 [2024-11-29 13:05:39.259702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.917 [2024-11-29 13:05:39.259733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.917 [2024-11-29 13:05:39.264729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.917 [2024-11-29 13:05:39.264816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.917 [2024-11-29 13:05:39.264861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.917 [2024-11-29 13:05:39.269959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.917 [2024-11-29 13:05:39.269997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.917 [2024-11-29 13:05:39.270025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.917 [2024-11-29 13:05:39.275276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.917 [2024-11-29 13:05:39.275370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.917 [2024-11-29 13:05:39.275384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.917 [2024-11-29 13:05:39.280703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.917 [2024-11-29 13:05:39.280738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.917 [2024-11-29 13:05:39.280766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.917 [2024-11-29 13:05:39.286004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.917 [2024-11-29 13:05:39.286041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.917 [2024-11-29 13:05:39.286069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.917 [2024-11-29 13:05:39.291318] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.917 [2024-11-29 13:05:39.291355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.917 [2024-11-29 13:05:39.291382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.917 [2024-11-29 13:05:39.296419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.917 [2024-11-29 13:05:39.296470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.917 [2024-11-29 13:05:39.296498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.917 [2024-11-29 13:05:39.301842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.917 [2024-11-29 13:05:39.302037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.917 [2024-11-29 13:05:39.302070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.917 [2024-11-29 13:05:39.307505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.917 [2024-11-29 13:05:39.307557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.917 [2024-11-29 13:05:39.307585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.917 [2024-11-29 13:05:39.312855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.917 [2024-11-29 13:05:39.312921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.917 [2024-11-29 13:05:39.312951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.917 [2024-11-29 13:05:39.318314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.917 [2024-11-29 13:05:39.318352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.917 [2024-11-29 13:05:39.318363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.917 [2024-11-29 13:05:39.323824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.917 [2024-11-29 13:05:39.323859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.917 [2024-11-29 13:05:39.323930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0 00:19:07.917 [2024-11-29 13:05:39.329093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.917 [2024-11-29 13:05:39.329160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.917 [2024-11-29 13:05:39.329188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.917 [2024-11-29 13:05:39.334546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.917 [2024-11-29 13:05:39.334580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.917 [2024-11-29 13:05:39.334607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.917 [2024-11-29 13:05:39.339855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.917 [2024-11-29 13:05:39.339936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.917 [2024-11-29 13:05:39.339964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.917 [2024-11-29 13:05:39.345102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.917 [2024-11-29 13:05:39.345166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.917 [2024-11-29 13:05:39.345194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.917 [2024-11-29 13:05:39.350214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.917 [2024-11-29 13:05:39.350249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.917 [2024-11-29 13:05:39.350277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.917 [2024-11-29 13:05:39.355470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.917 [2024-11-29 13:05:39.355726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.917 [2024-11-29 13:05:39.355744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.917 [2024-11-29 13:05:39.361005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.917 [2024-11-29 13:05:39.361057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.917 [2024-11-29 13:05:39.361085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.918 [2024-11-29 13:05:39.366294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.918 [2024-11-29 13:05:39.366449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.918 [2024-11-29 13:05:39.366476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.918 [2024-11-29 13:05:39.371929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.918 [2024-11-29 13:05:39.371996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.918 [2024-11-29 13:05:39.372024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.918 [2024-11-29 13:05:39.377323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.918 [2024-11-29 13:05:39.377569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.918 [2024-11-29 13:05:39.377586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.918 [2024-11-29 13:05:39.382752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.918 [2024-11-29 13:05:39.382788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.918 [2024-11-29 13:05:39.382816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.918 [2024-11-29 13:05:39.388133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.918 [2024-11-29 13:05:39.388217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.918 [2024-11-29 13:05:39.388246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.918 [2024-11-29 13:05:39.393416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.918 [2024-11-29 13:05:39.393453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.918 [2024-11-29 13:05:39.393495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.918 [2024-11-29 13:05:39.398659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.918 [2024-11-29 13:05:39.398711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.918 [2024-11-29 13:05:39.398739] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.918 [2024-11-29 13:05:39.403978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.918 [2024-11-29 13:05:39.404209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.918 [2024-11-29 13:05:39.404242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.918 [2024-11-29 13:05:39.409524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.918 [2024-11-29 13:05:39.409564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.918 [2024-11-29 13:05:39.409594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.918 [2024-11-29 13:05:39.414882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.918 [2024-11-29 13:05:39.414951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.918 [2024-11-29 13:05:39.415016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.918 [2024-11-29 13:05:39.420323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.918 [2024-11-29 13:05:39.420359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.918 [2024-11-29 13:05:39.420387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.918 [2024-11-29 13:05:39.425982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:07.918 [2024-11-29 13:05:39.426035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.918 [2024-11-29 13:05:39.426051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:08.178 [2024-11-29 13:05:39.431454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.178 [2024-11-29 13:05:39.431671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.178 [2024-11-29 13:05:39.431703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:08.178 [2024-11-29 13:05:39.437243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.178 [2024-11-29 13:05:39.437279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:08.178 [2024-11-29 13:05:39.437307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:08.178 [2024-11-29 13:05:39.442551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.178 [2024-11-29 13:05:39.442782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.179 [2024-11-29 13:05:39.442799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:08.179 [2024-11-29 13:05:39.448375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.179 [2024-11-29 13:05:39.448412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.179 [2024-11-29 13:05:39.448441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:08.179 [2024-11-29 13:05:39.453759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.179 [2024-11-29 13:05:39.453959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.179 [2024-11-29 13:05:39.453992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:08.179 [2024-11-29 13:05:39.459421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.179 [2024-11-29 13:05:39.459504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.179 [2024-11-29 13:05:39.459517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:08.179 [2024-11-29 13:05:39.464765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.179 [2024-11-29 13:05:39.464801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.179 [2024-11-29 13:05:39.464829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:08.179 [2024-11-29 13:05:39.470100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.179 [2024-11-29 13:05:39.470146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.179 [2024-11-29 13:05:39.470174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:08.179 [2024-11-29 13:05:39.475209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.179 [2024-11-29 13:05:39.475261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19232 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.179 [2024-11-29 13:05:39.475290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:08.179 [2024-11-29 13:05:39.480426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.179 [2024-11-29 13:05:39.480470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.179 [2024-11-29 13:05:39.480498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:08.179 [2024-11-29 13:05:39.485741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.179 [2024-11-29 13:05:39.485929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.179 [2024-11-29 13:05:39.485961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:08.179 [2024-11-29 13:05:39.491101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.179 [2024-11-29 13:05:39.491142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.179 [2024-11-29 13:05:39.491156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:08.179 [2024-11-29 13:05:39.496537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.179 [2024-11-29 13:05:39.496573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.179 [2024-11-29 13:05:39.496601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:08.179 [2024-11-29 13:05:39.501631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.179 [2024-11-29 13:05:39.501864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.179 [2024-11-29 13:05:39.501881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:08.179 [2024-11-29 13:05:39.507114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.179 [2024-11-29 13:05:39.507152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.179 [2024-11-29 13:05:39.507181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:08.179 [2024-11-29 13:05:39.512517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.179 [2024-11-29 13:05:39.512569] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.179 [2024-11-29 13:05:39.512598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:08.179 [2024-11-29 13:05:39.517657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.179 [2024-11-29 13:05:39.517832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.179 [2024-11-29 13:05:39.517865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:08.179 [2024-11-29 13:05:39.523002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.179 [2024-11-29 13:05:39.523042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.179 [2024-11-29 13:05:39.523071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:08.179 [2024-11-29 13:05:39.528250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.179 [2024-11-29 13:05:39.528411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.179 [2024-11-29 13:05:39.528427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:08.179 [2024-11-29 13:05:39.533879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.179 [2024-11-29 13:05:39.533960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.179 [2024-11-29 13:05:39.533990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:08.179 [2024-11-29 13:05:39.539175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.179 [2024-11-29 13:05:39.539212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.179 [2024-11-29 13:05:39.539241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:08.179 [2024-11-29 13:05:39.544520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.179 [2024-11-29 13:05:39.544562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.179 [2024-11-29 13:05:39.544590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:08.179 [2024-11-29 13:05:39.549855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.179 
[2024-11-29 13:05:39.549954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.179 [2024-11-29 13:05:39.549998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:08.179 [2024-11-29 13:05:39.555248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.179 [2024-11-29 13:05:39.555302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.179 [2024-11-29 13:05:39.555347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:08.179 [2024-11-29 13:05:39.560787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.179 [2024-11-29 13:05:39.560824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.179 [2024-11-29 13:05:39.560852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:08.179 [2024-11-29 13:05:39.566324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.179 [2024-11-29 13:05:39.566476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.179 [2024-11-29 13:05:39.566494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:08.179 [2024-11-29 13:05:39.572004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.179 [2024-11-29 13:05:39.572041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.179 [2024-11-29 13:05:39.572069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:08.179 [2024-11-29 13:05:39.577553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.179 [2024-11-29 13:05:39.577760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.179 [2024-11-29 13:05:39.577777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:08.179 [2024-11-29 13:05:39.583254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.179 [2024-11-29 13:05:39.583345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.180 [2024-11-29 13:05:39.583373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:08.180 [2024-11-29 13:05:39.588559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1cf59b0) 00:19:08.180 [2024-11-29 13:05:39.588729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.180 [2024-11-29 13:05:39.588760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:08.180 [2024-11-29 13:05:39.594149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.180 [2024-11-29 13:05:39.594202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.180 [2024-11-29 13:05:39.594231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:08.180 [2024-11-29 13:05:39.599419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.180 [2024-11-29 13:05:39.599643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.180 [2024-11-29 13:05:39.599676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:08.180 [2024-11-29 13:05:39.604998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.180 [2024-11-29 13:05:39.605035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.180 [2024-11-29 13:05:39.605063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:08.180 [2024-11-29 13:05:39.610447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.180 [2024-11-29 13:05:39.610640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.180 [2024-11-29 13:05:39.610671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:08.180 [2024-11-29 13:05:39.615952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.180 [2024-11-29 13:05:39.616018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.180 [2024-11-29 13:05:39.616048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:08.180 [2024-11-29 13:05:39.621305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.180 [2024-11-29 13:05:39.621538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.180 [2024-11-29 13:05:39.621555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:08.180 [2024-11-29 13:05:39.626604] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.180 [2024-11-29 13:05:39.626640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.180 [2024-11-29 13:05:39.626668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:08.180 [2024-11-29 13:05:39.631815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.180 [2024-11-29 13:05:39.631851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.180 [2024-11-29 13:05:39.631894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:08.180 [2024-11-29 13:05:39.636921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.180 [2024-11-29 13:05:39.637136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.180 [2024-11-29 13:05:39.637168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:08.180 [2024-11-29 13:05:39.642061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.180 [2024-11-29 13:05:39.642095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.180 [2024-11-29 13:05:39.642141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:08.180 [2024-11-29 13:05:39.647303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.180 [2024-11-29 13:05:39.647357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.180 [2024-11-29 13:05:39.647371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:08.180 [2024-11-29 13:05:39.652743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.180 [2024-11-29 13:05:39.652920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.180 [2024-11-29 13:05:39.652951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:08.180 [2024-11-29 13:05:39.658075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.180 [2024-11-29 13:05:39.658126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.180 [2024-11-29 13:05:39.658154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:19:08.180 [2024-11-29 13:05:39.663391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.180 [2024-11-29 13:05:39.663460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.180 [2024-11-29 13:05:39.663472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:08.180 [2024-11-29 13:05:39.668668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.180 [2024-11-29 13:05:39.668860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.180 [2024-11-29 13:05:39.668876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:08.180 [2024-11-29 13:05:39.674310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.180 [2024-11-29 13:05:39.674351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.180 [2024-11-29 13:05:39.674381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:08.180 [2024-11-29 13:05:39.679692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.180 [2024-11-29 13:05:39.679877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.180 [2024-11-29 13:05:39.679926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:08.180 [2024-11-29 13:05:39.685248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.180 [2024-11-29 13:05:39.685288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.180 [2024-11-29 13:05:39.685319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:08.440 [2024-11-29 13:05:39.690673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.440 [2024-11-29 13:05:39.690854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.440 [2024-11-29 13:05:39.690885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:08.440 [2024-11-29 13:05:39.696191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.440 [2024-11-29 13:05:39.696232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.440 [2024-11-29 13:05:39.696246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:08.440 [2024-11-29 13:05:39.701406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.440 [2024-11-29 13:05:39.701443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.440 [2024-11-29 13:05:39.701490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:08.440 [2024-11-29 13:05:39.706593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.440 [2024-11-29 13:05:39.706782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.440 [2024-11-29 13:05:39.706799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:08.440 [2024-11-29 13:05:39.712207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.440 [2024-11-29 13:05:39.712258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.440 [2024-11-29 13:05:39.712286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:08.440 [2024-11-29 13:05:39.717445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.440 [2024-11-29 13:05:39.717481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.440 [2024-11-29 13:05:39.717532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:08.440 [2024-11-29 13:05:39.722272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.440 [2024-11-29 13:05:39.722462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.440 [2024-11-29 13:05:39.722514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:08.440 [2024-11-29 13:05:39.727755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.440 [2024-11-29 13:05:39.727790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.441 [2024-11-29 13:05:39.727818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:08.441 [2024-11-29 13:05:39.733154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.441 [2024-11-29 13:05:39.733191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.441 [2024-11-29 13:05:39.733218] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:08.441 [2024-11-29 13:05:39.738299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.441 [2024-11-29 13:05:39.738334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.441 [2024-11-29 13:05:39.738361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:08.441 [2024-11-29 13:05:39.743541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.441 [2024-11-29 13:05:39.743574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.441 [2024-11-29 13:05:39.743601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:08.441 [2024-11-29 13:05:39.748537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.441 [2024-11-29 13:05:39.748766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.441 [2024-11-29 13:05:39.748782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:08.441 [2024-11-29 13:05:39.754033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.441 [2024-11-29 13:05:39.754069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.441 [2024-11-29 13:05:39.754097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:08.441 [2024-11-29 13:05:39.759253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.441 [2024-11-29 13:05:39.759481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.441 [2024-11-29 13:05:39.759498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:08.441 [2024-11-29 13:05:39.764658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.441 [2024-11-29 13:05:39.764710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.441 [2024-11-29 13:05:39.764738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:08.441 [2024-11-29 13:05:39.769842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.441 [2024-11-29 13:05:39.769906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:08.441 [2024-11-29 13:05:39.769935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:08.441 [2024-11-29 13:05:39.775092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cf59b0) 00:19:08.441 [2024-11-29 13:05:39.775128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.441 [2024-11-29 13:05:39.775156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:08.441 5758.00 IOPS, 719.75 MiB/s 00:19:08.441 Latency(us) 00:19:08.441 [2024-11-29T13:05:39.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:08.441 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:19:08.441 nvme0n1 : 2.00 5756.00 719.50 0.00 0.00 2776.15 2338.44 7983.48 00:19:08.441 [2024-11-29T13:05:39.956Z] =================================================================================================================== 00:19:08.441 [2024-11-29T13:05:39.956Z] Total : 5756.00 719.50 0.00 0.00 2776.15 2338.44 7983.48 00:19:08.441 { 00:19:08.441 "results": [ 00:19:08.441 { 00:19:08.441 "job": "nvme0n1", 00:19:08.441 "core_mask": "0x2", 00:19:08.441 "workload": "randread", 00:19:08.441 "status": "finished", 00:19:08.441 "queue_depth": 16, 00:19:08.441 "io_size": 131072, 00:19:08.441 "runtime": 2.003475, 00:19:08.441 "iops": 5755.9989518212105, 00:19:08.441 "mibps": 719.4998689776513, 00:19:08.441 "io_failed": 0, 00:19:08.441 "io_timeout": 0, 00:19:08.441 "avg_latency_us": 2776.1452729164694, 00:19:08.441 "min_latency_us": 2338.4436363636364, 00:19:08.441 "max_latency_us": 7983.476363636363 00:19:08.441 } 00:19:08.441 ], 00:19:08.441 "core_count": 1 00:19:08.441 } 00:19:08.441 13:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:08.441 13:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:08.441 13:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:08.441 | .driver_specific 00:19:08.441 | .nvme_error 00:19:08.441 | .status_code 00:19:08.441 | .command_transient_transport_error' 00:19:08.441 13:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:08.699 13:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 372 > 0 )) 00:19:08.699 13:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80528 00:19:08.699 13:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80528 ']' 00:19:08.700 13:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80528 00:19:08.700 13:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:19:08.700 13:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:08.700 13:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80528 00:19:08.958 killing process with pid 
80528 00:19:08.958 Received shutdown signal, test time was about 2.000000 seconds 00:19:08.958 00:19:08.958 Latency(us) 00:19:08.958 [2024-11-29T13:05:40.473Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:08.958 [2024-11-29T13:05:40.473Z] =================================================================================================================== 00:19:08.958 [2024-11-29T13:05:40.473Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:08.958 13:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:08.958 13:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:08.958 13:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80528' 00:19:08.958 13:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80528 00:19:08.958 13:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80528 00:19:09.217 13:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:19:09.217 13:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:09.217 13:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:19:09.217 13:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:19:09.217 13:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:19:09.217 13:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80575 00:19:09.217 13:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80575 /var/tmp/bperf.sock 00:19:09.217 13:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:19:09.217 13:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80575 ']' 00:19:09.217 13:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:09.217 13:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:09.217 13:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:09.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:09.217 13:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:09.217 13:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:09.217 [2024-11-29 13:05:40.542135] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:19:09.217 [2024-11-29 13:05:40.542413] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80575 ] 00:19:09.217 [2024-11-29 13:05:40.688159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.475 [2024-11-29 13:05:40.763367] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:09.475 [2024-11-29 13:05:40.840910] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:09.475 13:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:09.475 13:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:19:09.475 13:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:09.475 13:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:09.733 13:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:09.733 13:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.733 13:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:09.733 13:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.733 13:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:09.733 13:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:10.300 nvme0n1 00:19:10.300 13:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:19:10.300 13:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.300 13:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:10.300 13:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.300 13:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:10.300 13:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:10.300 Running I/O for 2 seconds... 
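For anyone replaying this randwrite digest-error pass outside the CI harness, the xtrace above reduces to roughly the following sequence. This is a sketch assembled only from the commands visible in the trace: the bperf socket path, target address 10.0.0.3:4420 and NQN are the values this run happens to use, backgrounding bdevperf with a bare '&' stands in for the harness's own process handling, and the trace's rpc_cmd wrapper is approximated here as a plain scripts/rpc.py call on the default application socket.

  # start bdevperf (2 s randwrite, 4 KiB I/O, queue depth 128) and let it wait for RPC configuration (-z)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

  # keep per-type NVMe error counters and retry transport errors indefinitely
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # attach the target with data digest enabled; this creates bdev nvme0n1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # corrupt every 256th crc32c operation (rpc_cmd in the trace, i.e. not the bperf socket)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

  # run the workload, then read back the transient transport error counter the test asserts on
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'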
00:19:10.300 [2024-11-29 13:05:41.771830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ef7100 00:19:10.300 [2024-11-29 13:05:41.773805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.300 [2024-11-29 13:05:41.773847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:10.300 [2024-11-29 13:05:41.788461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ef7970 00:19:10.300 [2024-11-29 13:05:41.790091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.300 [2024-11-29 13:05:41.790126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.300 [2024-11-29 13:05:41.804523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ef81e0 00:19:10.300 [2024-11-29 13:05:41.806207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.300 [2024-11-29 13:05:41.806242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:10.642 [2024-11-29 13:05:41.820304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ef8a50 00:19:10.642 [2024-11-29 13:05:41.821960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.642 [2024-11-29 13:05:41.822174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:10.642 [2024-11-29 13:05:41.837154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ef92c0 00:19:10.642 [2024-11-29 13:05:41.839093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.642 [2024-11-29 13:05:41.839132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:10.642 [2024-11-29 13:05:41.854951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ef9b30 00:19:10.642 [2024-11-29 13:05:41.856632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.642 [2024-11-29 13:05:41.856666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:10.642 [2024-11-29 13:05:41.871683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016efa3a0 00:19:10.642 [2024-11-29 13:05:41.873289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.642 [2024-11-29 13:05:41.873322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 
sqhd:0076 p:0 m:0 dnr:0 00:19:10.642 [2024-11-29 13:05:41.888053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016efac10 00:19:10.642 [2024-11-29 13:05:41.889812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.642 [2024-11-29 13:05:41.889848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:10.642 [2024-11-29 13:05:41.904668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016efb480 00:19:10.642 [2024-11-29 13:05:41.906246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.642 [2024-11-29 13:05:41.906279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:10.642 [2024-11-29 13:05:41.921028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016efbcf0 00:19:10.642 [2024-11-29 13:05:41.922541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.643 [2024-11-29 13:05:41.922574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:10.643 [2024-11-29 13:05:41.937391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016efc560 00:19:10.643 [2024-11-29 13:05:41.938849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.643 [2024-11-29 13:05:41.938912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:10.643 [2024-11-29 13:05:41.953214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016efcdd0 00:19:10.643 [2024-11-29 13:05:41.954880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.643 [2024-11-29 13:05:41.954936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:10.643 [2024-11-29 13:05:41.969627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016efd640 00:19:10.643 [2024-11-29 13:05:41.971154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.643 [2024-11-29 13:05:41.971192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:10.643 [2024-11-29 13:05:41.986615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016efdeb0 00:19:10.643 [2024-11-29 13:05:41.988276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.643 [2024-11-29 13:05:41.988311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:10.643 [2024-11-29 13:05:42.004290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016efe720 00:19:10.643 [2024-11-29 13:05:42.005670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.643 [2024-11-29 13:05:42.005705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:10.643 [2024-11-29 13:05:42.021050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016eff3c8 00:19:10.643 [2024-11-29 13:05:42.022395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.643 [2024-11-29 13:05:42.022427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:10.643 [2024-11-29 13:05:42.044232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016eff3c8 00:19:10.643 [2024-11-29 13:05:42.047111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.643 [2024-11-29 13:05:42.047291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:10.643 [2024-11-29 13:05:42.060776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016efe720 00:19:10.643 [2024-11-29 13:05:42.063246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.643 [2024-11-29 13:05:42.063284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:10.643 [2024-11-29 13:05:42.076204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016efdeb0 00:19:10.643 [2024-11-29 13:05:42.078749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.643 [2024-11-29 13:05:42.078781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:10.643 [2024-11-29 13:05:42.092130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016efd640 00:19:10.643 [2024-11-29 13:05:42.094569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.643 [2024-11-29 13:05:42.094783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:10.643 [2024-11-29 13:05:42.108159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016efcdd0 00:19:10.643 [2024-11-29 13:05:42.110701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.643 [2024-11-29 13:05:42.110733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:20 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:10.643 [2024-11-29 13:05:42.124248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016efc560 00:19:10.643 [2024-11-29 13:05:42.126666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.643 [2024-11-29 13:05:42.126743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:10.903 [2024-11-29 13:05:42.140082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016efbcf0 00:19:10.903 [2024-11-29 13:05:42.142591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.903 [2024-11-29 13:05:42.142758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:10.904 [2024-11-29 13:05:42.156057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016efb480 00:19:10.904 [2024-11-29 13:05:42.158494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.904 [2024-11-29 13:05:42.158527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:10.904 [2024-11-29 13:05:42.171988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016efac10 00:19:10.904 [2024-11-29 13:05:42.174399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.904 [2024-11-29 13:05:42.174432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:10.904 [2024-11-29 13:05:42.187836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016efa3a0 00:19:10.904 [2024-11-29 13:05:42.190315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.904 [2024-11-29 13:05:42.190368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:10.904 [2024-11-29 13:05:42.204752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ef9b30 00:19:10.904 [2024-11-29 13:05:42.207142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.904 [2024-11-29 13:05:42.207311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:10.904 [2024-11-29 13:05:42.221014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ef92c0 00:19:10.904 [2024-11-29 13:05:42.223312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.904 [2024-11-29 13:05:42.223365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:10.904 [2024-11-29 13:05:42.236905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ef8a50 00:19:10.904 [2024-11-29 13:05:42.239170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.904 [2024-11-29 13:05:42.239211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:10.904 [2024-11-29 13:05:42.252770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ef81e0 00:19:10.904 [2024-11-29 13:05:42.255129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.904 [2024-11-29 13:05:42.255177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:10.904 [2024-11-29 13:05:42.268793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ef7970 00:19:10.904 [2024-11-29 13:05:42.271015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.904 [2024-11-29 13:05:42.271182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:10.904 [2024-11-29 13:05:42.284986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ef7100 00:19:10.904 [2024-11-29 13:05:42.287333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.904 [2024-11-29 13:05:42.287379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:10.904 [2024-11-29 13:05:42.301266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ef6890 00:19:10.904 [2024-11-29 13:05:42.303349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.904 [2024-11-29 13:05:42.303383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:10.904 [2024-11-29 13:05:42.316712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ef6020 00:19:10.904 [2024-11-29 13:05:42.318899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.904 [2024-11-29 13:05:42.318995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:10.904 [2024-11-29 13:05:42.332757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ef57b0 00:19:10.904 [2024-11-29 13:05:42.334877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.904 [2024-11-29 13:05:42.334933] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:10.904 [2024-11-29 13:05:42.348792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ef4f40 00:19:10.904 [2024-11-29 13:05:42.351064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.904 [2024-11-29 13:05:42.351100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:10.904 [2024-11-29 13:05:42.365075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ef46d0 00:19:10.904 [2024-11-29 13:05:42.367157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.904 [2024-11-29 13:05:42.367195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:10.904 [2024-11-29 13:05:42.381145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ef3e60 00:19:10.904 [2024-11-29 13:05:42.383367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.904 [2024-11-29 13:05:42.383578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:10.904 [2024-11-29 13:05:42.397142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ef35f0 00:19:10.904 [2024-11-29 13:05:42.399525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:18448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.904 [2024-11-29 13:05:42.399705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:10.904 [2024-11-29 13:05:42.413435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ef2d80 00:19:10.904 [2024-11-29 13:05:42.415804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.904 [2024-11-29 13:05:42.416022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:11.163 [2024-11-29 13:05:42.429487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ef2510 00:19:11.164 [2024-11-29 13:05:42.431848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.164 [2024-11-29 13:05:42.432062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:11.164 [2024-11-29 13:05:42.445839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ef1ca0 00:19:11.164 [2024-11-29 13:05:42.448151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.164 [2024-11-29 13:05:42.448357] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:11.164 [2024-11-29 13:05:42.462217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ef1430 00:19:11.164 [2024-11-29 13:05:42.464506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.164 [2024-11-29 13:05:42.464740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:11.164 [2024-11-29 13:05:42.478524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ef0bc0 00:19:11.164 [2024-11-29 13:05:42.480749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:24572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.164 [2024-11-29 13:05:42.480951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:11.164 [2024-11-29 13:05:42.494954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ef0350 00:19:11.164 [2024-11-29 13:05:42.497222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:25277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.164 [2024-11-29 13:05:42.497400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:11.164 [2024-11-29 13:05:42.511186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016eefae0 00:19:11.164 [2024-11-29 13:05:42.513371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.164 [2024-11-29 13:05:42.513409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:11.164 [2024-11-29 13:05:42.527024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016eef270 00:19:11.164 [2024-11-29 13:05:42.529080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.164 [2024-11-29 13:05:42.529117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:11.164 [2024-11-29 13:05:42.542538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016eeea00 00:19:11.164 [2024-11-29 13:05:42.544487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:9528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.164 [2024-11-29 13:05:42.544518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:11.164 [2024-11-29 13:05:42.558403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016eee190 00:19:11.164 [2024-11-29 13:05:42.560290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.164 [2024-11-29 
13:05:42.560339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:11.164 [2024-11-29 13:05:42.574132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016eed920 00:19:11.164 [2024-11-29 13:05:42.576015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:25166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.164 [2024-11-29 13:05:42.576051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:11.164 [2024-11-29 13:05:42.589869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016eed0b0 00:19:11.164 [2024-11-29 13:05:42.591851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.164 [2024-11-29 13:05:42.592110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:11.164 [2024-11-29 13:05:42.605893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016eec840 00:19:11.164 [2024-11-29 13:05:42.608138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.164 [2024-11-29 13:05:42.608362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:11.164 [2024-11-29 13:05:42.622629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016eebfd0 00:19:11.164 [2024-11-29 13:05:42.624839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.164 [2024-11-29 13:05:42.625110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:11.164 [2024-11-29 13:05:42.639816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016eeb760 00:19:11.164 [2024-11-29 13:05:42.641956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.164 [2024-11-29 13:05:42.642145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:11.164 [2024-11-29 13:05:42.656387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016eeaef0 00:19:11.164 [2024-11-29 13:05:42.658477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.164 [2024-11-29 13:05:42.658647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:11.164 [2024-11-29 13:05:42.673441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016eea680 00:19:11.164 [2024-11-29 13:05:42.675487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:11.164 [2024-11-29 13:05:42.675737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:11.425 [2024-11-29 13:05:42.690117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee9e10 00:19:11.425 [2024-11-29 13:05:42.692174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.425 [2024-11-29 13:05:42.692348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:11.425 [2024-11-29 13:05:42.706589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee95a0 00:19:11.425 [2024-11-29 13:05:42.708604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.425 [2024-11-29 13:05:42.708851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:11.425 [2024-11-29 13:05:42.723209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee8d30 00:19:11.425 [2024-11-29 13:05:42.725224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.425 [2024-11-29 13:05:42.725259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:11.425 [2024-11-29 13:05:42.739409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee84c0 00:19:11.425 [2024-11-29 13:05:42.741089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.425 [2024-11-29 13:05:42.741121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:11.425 [2024-11-29 13:05:42.755275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016 15435.00 IOPS, 60.29 MiB/s [2024-11-29T13:05:42.940Z] ee7c50 00:19:11.425 [2024-11-29 13:05:42.757188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.425 [2024-11-29 13:05:42.757221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:11.425 [2024-11-29 13:05:42.770864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee73e0 00:19:11.425 [2024-11-29 13:05:42.772762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.425 [2024-11-29 13:05:42.772790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:11.425 [2024-11-29 13:05:42.787146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee6b70 00:19:11.425 [2024-11-29 13:05:42.788828] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.425 [2024-11-29 13:05:42.788860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:11.425 [2024-11-29 13:05:42.803529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee6300 00:19:11.425 [2024-11-29 13:05:42.805123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.425 [2024-11-29 13:05:42.805156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:11.425 [2024-11-29 13:05:42.819285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee5a90 00:19:11.425 [2024-11-29 13:05:42.820887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.425 [2024-11-29 13:05:42.820947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:11.425 [2024-11-29 13:05:42.834953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee5220 00:19:11.425 [2024-11-29 13:05:42.836630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.425 [2024-11-29 13:05:42.836664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:11.425 [2024-11-29 13:05:42.850900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee49b0 00:19:11.425 [2024-11-29 13:05:42.852547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.425 [2024-11-29 13:05:42.852580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:11.425 [2024-11-29 13:05:42.866428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee4140 00:19:11.425 [2024-11-29 13:05:42.867959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.425 [2024-11-29 13:05:42.868189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:11.425 [2024-11-29 13:05:42.881861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee38d0 00:19:11.425 [2024-11-29 13:05:42.883393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.425 [2024-11-29 13:05:42.883572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:11.425 [2024-11-29 13:05:42.897705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee3060 00:19:11.425 [2024-11-29 13:05:42.899246] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.425 [2024-11-29 13:05:42.899284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:11.425 [2024-11-29 13:05:42.913440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee27f0 00:19:11.425 [2024-11-29 13:05:42.915006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.425 [2024-11-29 13:05:42.915085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:11.425 [2024-11-29 13:05:42.929263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee1f80 00:19:11.426 [2024-11-29 13:05:42.930743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.426 [2024-11-29 13:05:42.930776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:11.685 [2024-11-29 13:05:42.945066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee1710 00:19:11.685 [2024-11-29 13:05:42.946558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.685 [2024-11-29 13:05:42.946590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:11.685 [2024-11-29 13:05:42.960787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee0ea0 00:19:11.685 [2024-11-29 13:05:42.962246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.685 [2024-11-29 13:05:42.962279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:11.685 [2024-11-29 13:05:42.976448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee0630 00:19:11.685 [2024-11-29 13:05:42.977841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.685 [2024-11-29 13:05:42.977874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:11.685 [2024-11-29 13:05:42.991836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016edfdc0 00:19:11.685 [2024-11-29 13:05:42.993219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.685 [2024-11-29 13:05:42.993254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:11.685 [2024-11-29 13:05:43.008113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016edf550 00:19:11.685 [2024-11-29 
13:05:43.009777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.685 [2024-11-29 13:05:43.009813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:11.685 [2024-11-29 13:05:43.025721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016edece0 00:19:11.685 [2024-11-29 13:05:43.027208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.685 [2024-11-29 13:05:43.027246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:11.685 [2024-11-29 13:05:43.042872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ede470 00:19:11.685 [2024-11-29 13:05:43.044281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.685 [2024-11-29 13:05:43.044362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:11.685 [2024-11-29 13:05:43.066810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016eddc00 00:19:11.685 [2024-11-29 13:05:43.069767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.685 [2024-11-29 13:05:43.069797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:11.685 [2024-11-29 13:05:43.083112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ede470 00:19:11.686 [2024-11-29 13:05:43.085794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.686 [2024-11-29 13:05:43.085836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:11.686 [2024-11-29 13:05:43.099141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016edece0 00:19:11.686 [2024-11-29 13:05:43.101694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.686 [2024-11-29 13:05:43.101728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:11.686 [2024-11-29 13:05:43.114798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016edf550 00:19:11.686 [2024-11-29 13:05:43.117315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.686 [2024-11-29 13:05:43.117349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:11.686 [2024-11-29 13:05:43.130633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016edfdc0 00:19:11.686 
[2024-11-29 13:05:43.133142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.686 [2024-11-29 13:05:43.133337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:11.686 [2024-11-29 13:05:43.146904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee0630 00:19:11.686 [2024-11-29 13:05:43.149584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.686 [2024-11-29 13:05:43.149616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:11.686 [2024-11-29 13:05:43.163436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee0ea0 00:19:11.686 [2024-11-29 13:05:43.165891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.686 [2024-11-29 13:05:43.165948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:11.686 [2024-11-29 13:05:43.179625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee1710 00:19:11.686 [2024-11-29 13:05:43.182144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.686 [2024-11-29 13:05:43.182350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:11.686 [2024-11-29 13:05:43.196230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee1f80 00:19:11.946 [2024-11-29 13:05:43.198651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.946 [2024-11-29 13:05:43.198684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:11.946 [2024-11-29 13:05:43.212488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee27f0 00:19:11.946 [2024-11-29 13:05:43.214830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.946 [2024-11-29 13:05:43.214865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:11.946 [2024-11-29 13:05:43.228087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee3060 00:19:11.946 [2024-11-29 13:05:43.230224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.946 [2024-11-29 13:05:43.230256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:11.946 [2024-11-29 13:05:43.243588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with 
pdu=0x200016ee38d0 00:19:11.946 [2024-11-29 13:05:43.245802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.946 [2024-11-29 13:05:43.245833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:11.946 [2024-11-29 13:05:43.259332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee4140 00:19:11.946 [2024-11-29 13:05:43.261571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.946 [2024-11-29 13:05:43.261602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:11.946 [2024-11-29 13:05:43.274883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee49b0 00:19:11.946 [2024-11-29 13:05:43.277144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.946 [2024-11-29 13:05:43.277318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:11.946 [2024-11-29 13:05:43.290769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee5220 00:19:11.946 [2024-11-29 13:05:43.292995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.946 [2024-11-29 13:05:43.293030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:11.946 [2024-11-29 13:05:43.306421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee5a90 00:19:11.946 [2024-11-29 13:05:43.308721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.946 [2024-11-29 13:05:43.308752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:11.946 [2024-11-29 13:05:43.322186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee6300 00:19:11.946 [2024-11-29 13:05:43.324445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.946 [2024-11-29 13:05:43.324479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:11.946 [2024-11-29 13:05:43.338164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee6b70 00:19:11.946 [2024-11-29 13:05:43.340288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.946 [2024-11-29 13:05:43.340328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:11.946 [2024-11-29 13:05:43.353780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x19cfae0) with pdu=0x200016ee73e0 00:19:11.946 [2024-11-29 13:05:43.356076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.946 [2024-11-29 13:05:43.356111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:11.946 [2024-11-29 13:05:43.369575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee7c50 00:19:11.946 [2024-11-29 13:05:43.371940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.946 [2024-11-29 13:05:43.371994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:11.946 [2024-11-29 13:05:43.385997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee84c0 00:19:11.946 [2024-11-29 13:05:43.388340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.946 [2024-11-29 13:05:43.388373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:11.946 [2024-11-29 13:05:43.402239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee8d30 00:19:11.946 [2024-11-29 13:05:43.404587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.946 [2024-11-29 13:05:43.404621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:11.946 [2024-11-29 13:05:43.418245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee95a0 00:19:11.946 [2024-11-29 13:05:43.420508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.946 [2024-11-29 13:05:43.420558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:11.946 [2024-11-29 13:05:43.433987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ee9e10 00:19:11.946 [2024-11-29 13:05:43.436094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.946 [2024-11-29 13:05:43.436130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:11.946 [2024-11-29 13:05:43.449639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016eea680 00:19:11.946 [2024-11-29 13:05:43.451851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.946 [2024-11-29 13:05:43.452051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:12.207 [2024-11-29 13:05:43.465244] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x19cfae0) with pdu=0x200016eeaef0 00:19:12.207 [2024-11-29 13:05:43.467265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.207 [2024-11-29 13:05:43.467447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:12.207 [2024-11-29 13:05:43.481095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016eeb760 00:19:12.207 [2024-11-29 13:05:43.483048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.207 [2024-11-29 13:05:43.483084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:12.207 [2024-11-29 13:05:43.497631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016eebfd0 00:19:12.207 [2024-11-29 13:05:43.499816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:18210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.207 [2024-11-29 13:05:43.499855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:12.207 [2024-11-29 13:05:43.513454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016eec840 00:19:12.207 [2024-11-29 13:05:43.515379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.207 [2024-11-29 13:05:43.515414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:12.207 [2024-11-29 13:05:43.529113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016eed0b0 00:19:12.207 [2024-11-29 13:05:43.531282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.207 [2024-11-29 13:05:43.531320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:12.207 [2024-11-29 13:05:43.546582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016eed920 00:19:12.207 [2024-11-29 13:05:43.548806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.207 [2024-11-29 13:05:43.548838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:12.207 [2024-11-29 13:05:43.563746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016eee190 00:19:12.207 [2024-11-29 13:05:43.565628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.207 [2024-11-29 13:05:43.565662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:12.207 [2024-11-29 13:05:43.580976] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016eeea00 00:19:12.207 [2024-11-29 13:05:43.582915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.207 [2024-11-29 13:05:43.582951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:12.207 [2024-11-29 13:05:43.597975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016eef270 00:19:12.207 [2024-11-29 13:05:43.599970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.207 [2024-11-29 13:05:43.600008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:12.207 [2024-11-29 13:05:43.615370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016eefae0 00:19:12.207 [2024-11-29 13:05:43.617530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.207 [2024-11-29 13:05:43.617563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:12.207 [2024-11-29 13:05:43.631184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ef0350 00:19:12.207 [2024-11-29 13:05:43.632933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.207 [2024-11-29 13:05:43.632980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:12.207 [2024-11-29 13:05:43.645807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ef0bc0 00:19:12.207 [2024-11-29 13:05:43.647896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.207 [2024-11-29 13:05:43.648114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:12.207 [2024-11-29 13:05:43.662900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ef1430 00:19:12.207 [2024-11-29 13:05:43.664752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:10696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.207 [2024-11-29 13:05:43.664786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:12.207 [2024-11-29 13:05:43.680099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ef1ca0 00:19:12.207 [2024-11-29 13:05:43.681999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.208 [2024-11-29 13:05:43.682033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:12.208 
[2024-11-29 13:05:43.696044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ef2510 00:19:12.208 [2024-11-29 13:05:43.697777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.208 [2024-11-29 13:05:43.697809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:12.208 [2024-11-29 13:05:43.710523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ef2d80 00:19:12.208 [2024-11-29 13:05:43.712293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.208 [2024-11-29 13:05:43.712325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:12.467 [2024-11-29 13:05:43.724920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ef35f0 00:19:12.467 [2024-11-29 13:05:43.726495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.467 [2024-11-29 13:05:43.726529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:12.467 [2024-11-29 13:05:43.739052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ef3e60 00:19:12.467 [2024-11-29 13:05:43.740959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.467 [2024-11-29 13:05:43.740993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:12.467 [2024-11-29 13:05:43.753806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19cfae0) with pdu=0x200016ef46d0 00:19:12.467 [2024-11-29 13:05:43.755670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.467 [2024-11-29 13:05:43.755704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:12.467 15624.00 IOPS, 61.03 MiB/s 00:19:12.467 Latency(us) 00:19:12.467 [2024-11-29T13:05:43.982Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.467 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:12.467 nvme0n1 : 2.01 15676.58 61.24 0.00 0.00 8158.84 5898.24 31457.28 00:19:12.467 [2024-11-29T13:05:43.982Z] =================================================================================================================== 00:19:12.467 [2024-11-29T13:05:43.982Z] Total : 15676.58 61.24 0.00 0.00 8158.84 5898.24 31457.28 00:19:12.467 { 00:19:12.467 "results": [ 00:19:12.467 { 00:19:12.467 "job": "nvme0n1", 00:19:12.467 "core_mask": "0x2", 00:19:12.467 "workload": "randwrite", 00:19:12.467 "status": "finished", 00:19:12.467 "queue_depth": 128, 00:19:12.467 "io_size": 4096, 00:19:12.467 "runtime": 2.009495, 00:19:12.467 "iops": 15676.575458013083, 00:19:12.467 "mibps": 61.23662288286361, 00:19:12.467 "io_failed": 0, 00:19:12.467 
"io_timeout": 0, 00:19:12.467 "avg_latency_us": 8158.836917713737, 00:19:12.467 "min_latency_us": 5898.24, 00:19:12.467 "max_latency_us": 31457.28 00:19:12.467 } 00:19:12.467 ], 00:19:12.467 "core_count": 1 00:19:12.467 } 00:19:12.467 13:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:12.467 13:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:12.467 13:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:12.467 | .driver_specific 00:19:12.467 | .nvme_error 00:19:12.467 | .status_code 00:19:12.467 | .command_transient_transport_error' 00:19:12.467 13:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:12.727 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 123 > 0 )) 00:19:12.727 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80575 00:19:12.727 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80575 ']' 00:19:12.727 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80575 00:19:12.727 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:19:12.727 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:12.727 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80575 00:19:12.727 killing process with pid 80575 00:19:12.727 Received shutdown signal, test time was about 2.000000 seconds 00:19:12.727 00:19:12.727 Latency(us) 00:19:12.727 [2024-11-29T13:05:44.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.727 [2024-11-29T13:05:44.242Z] =================================================================================================================== 00:19:12.727 [2024-11-29T13:05:44.242Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:12.727 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:12.727 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:12.727 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80575' 00:19:12.727 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80575 00:19:12.727 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80575 00:19:12.986 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:19:12.987 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:12.987 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:19:12.987 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:19:12.987 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:19:12.987 13:05:44 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80628 00:19:12.987 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80628 /var/tmp/bperf.sock 00:19:12.987 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:19:12.987 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80628 ']' 00:19:12.987 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:12.987 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:12.987 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:12.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:12.987 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:12.987 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:12.987 [2024-11-29 13:05:44.420985] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:19:12.987 [2024-11-29 13:05:44.421288] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80628 ] 00:19:12.987 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:12.987 Zero copy mechanism will not be used. 
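The transient-error check that closed the previous pass above (the "(( 123 > 0 ))" test) reduces to a single bdev_get_iostat RPC piped through jq. A minimal sketch of that step, assembled from the fragments recorded in the trace, follows; the errcount variable is illustrative, while the rpc.py invocation, socket path, bdev name and jq filter are the ones shown in the log.

  # Read the per-command transient transport error counter from the bperf app
  # and require that at least one digest-induced error was observed.
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 ))   # the pass above counted 123 such completions
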
00:19:13.246 [2024-11-29 13:05:44.566952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.246 [2024-11-29 13:05:44.611998] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:13.246 [2024-11-29 13:05:44.680732] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:13.246 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:13.246 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:19:13.246 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:13.246 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:13.504 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:13.504 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.504 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:13.504 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.504 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:13.505 13:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:14.073 nvme0n1 00:19:14.073 13:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:19:14.073 13:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.073 13:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:14.073 13:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.073 13:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:14.073 13:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:14.073 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:14.073 Zero copy mechanism will not be used. 00:19:14.073 Running I/O for 2 seconds... 
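Before the 128 KiB, queue-depth 16 randwrite pass whose output follows, the trace above arms data-digest error injection through a short sequence of RPCs. The sketch below condenses that sequence and is not a verbatim excerpt of host/digest.sh: every rpc.py and bdevperf.py command line is copied from the trace, while RPC_SOCK is a placeholder for the socket targeted by the suite's rpc_cmd wrapper, which this excerpt does not show.

  BPERF_SOCK=/var/tmp/bperf.sock
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  RPC_SOCK=${RPC_SOCK:?placeholder - socket used by rpc_cmd is not shown in this log}

  # Keep per-command NVMe error statistics and retry failed I/O indefinitely.
  "$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # crc32c error injection stays disabled while the controller attaches.
  "$RPC" -s "$RPC_SOCK" accel_error_inject_error -o crc32c -t disable

  # Attach the TCP controller with data digest enabled (--ddgst).
  "$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt 32 crc32c operations so the subsequent writes hit data digest errors
  # and complete with COMMAND TRANSIENT TRANSPORT ERROR, as logged below.
  "$RPC" -s "$RPC_SOCK" accel_error_inject_error -o crc32c -t corrupt -i 32

  # Start the workload configured on the bdevperf command line (-w randwrite -o 131072 -q 16 -t 2 -z).
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests
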
00:19:14.073 [2024-11-29 13:05:45.473469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.073 [2024-11-29 13:05:45.473600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.073 [2024-11-29 13:05:45.473628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:14.073 [2024-11-29 13:05:45.479148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.073 [2024-11-29 13:05:45.479248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.073 [2024-11-29 13:05:45.479270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:14.073 [2024-11-29 13:05:45.484589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.073 [2024-11-29 13:05:45.484702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.073 [2024-11-29 13:05:45.484723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:14.074 [2024-11-29 13:05:45.489632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.074 [2024-11-29 13:05:45.489744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.074 [2024-11-29 13:05:45.489765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:14.074 [2024-11-29 13:05:45.494626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.074 [2024-11-29 13:05:45.494930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.074 [2024-11-29 13:05:45.494951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:14.074 [2024-11-29 13:05:45.500208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.074 [2024-11-29 13:05:45.500300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.074 [2024-11-29 13:05:45.500320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:14.074 [2024-11-29 13:05:45.505222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.074 [2024-11-29 13:05:45.505334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.074 [2024-11-29 13:05:45.505354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:14.074 [2024-11-29 13:05:45.510217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.074 [2024-11-29 13:05:45.510310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.074 [2024-11-29 13:05:45.510330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:14.074 [2024-11-29 13:05:45.515599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.074 [2024-11-29 13:05:45.515694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.074 [2024-11-29 13:05:45.515714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:14.074 [2024-11-29 13:05:45.520525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.074 [2024-11-29 13:05:45.520620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.074 [2024-11-29 13:05:45.520640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:14.074 [2024-11-29 13:05:45.525543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.074 [2024-11-29 13:05:45.525655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.074 [2024-11-29 13:05:45.525675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:14.074 [2024-11-29 13:05:45.530617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.074 [2024-11-29 13:05:45.530866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.074 [2024-11-29 13:05:45.530886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:14.074 [2024-11-29 13:05:45.536179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.074 [2024-11-29 13:05:45.536274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.074 [2024-11-29 13:05:45.536294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:14.074 [2024-11-29 13:05:45.541265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.074 [2024-11-29 13:05:45.541362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.074 [2024-11-29 13:05:45.541383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:14.074 [2024-11-29 13:05:45.546362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.074 [2024-11-29 13:05:45.546642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.074 [2024-11-29 13:05:45.546662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:14.074 [2024-11-29 13:05:45.552030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.074 [2024-11-29 13:05:45.552145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.074 [2024-11-29 13:05:45.552165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:14.074 [2024-11-29 13:05:45.557136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.074 [2024-11-29 13:05:45.557253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.074 [2024-11-29 13:05:45.557273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:14.074 [2024-11-29 13:05:45.562370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.074 [2024-11-29 13:05:45.562469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.074 [2024-11-29 13:05:45.562488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:14.074 [2024-11-29 13:05:45.567676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.074 [2024-11-29 13:05:45.567788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.074 [2024-11-29 13:05:45.567808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:14.074 [2024-11-29 13:05:45.572706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.074 [2024-11-29 13:05:45.572818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.074 [2024-11-29 13:05:45.572838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:14.074 [2024-11-29 13:05:45.577673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.074 [2024-11-29 13:05:45.577947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.074 [2024-11-29 13:05:45.577982] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:14.074 [2024-11-29 13:05:45.583233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8
00:19:14.074 [2024-11-29 13:05:45.583344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:14.074 [2024-11-29 13:05:45.583365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
... (the same three-line sequence repeats for each injected WRITE between [2024-11-29 13:05:45.588] and [2024-11-29 13:05:46.271]: a data_crc32_calc_done *ERROR* "Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8", the WRITE command print with sqid:1, nsid:1, cid 0-4, varying lba, len:32, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 completion) ...
00:19:14.860 [2024-11-29 13:05:46.275905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8
00:19:14.860 [2024-11-29 13:05:46.275995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT
0x0 00:19:14.860 [2024-11-29 13:05:46.276014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:14.860 [2024-11-29 13:05:46.280647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.860 [2024-11-29 13:05:46.280755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.860 [2024-11-29 13:05:46.280774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:14.860 [2024-11-29 13:05:46.285447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.860 [2024-11-29 13:05:46.285556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.860 [2024-11-29 13:05:46.285575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:14.860 [2024-11-29 13:05:46.290337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.860 [2024-11-29 13:05:46.290424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.860 [2024-11-29 13:05:46.290442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:14.861 [2024-11-29 13:05:46.295105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.861 [2024-11-29 13:05:46.295218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.861 [2024-11-29 13:05:46.295237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:14.861 [2024-11-29 13:05:46.299946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.861 [2024-11-29 13:05:46.300048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.861 [2024-11-29 13:05:46.300068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:14.861 [2024-11-29 13:05:46.304700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.861 [2024-11-29 13:05:46.304807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.861 [2024-11-29 13:05:46.304827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:14.861 [2024-11-29 13:05:46.309523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.861 [2024-11-29 13:05:46.309620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24896 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.861 [2024-11-29 13:05:46.309639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:14.861 [2024-11-29 13:05:46.314265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.861 [2024-11-29 13:05:46.314374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.861 [2024-11-29 13:05:46.314393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:14.861 [2024-11-29 13:05:46.319064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.861 [2024-11-29 13:05:46.319171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.861 [2024-11-29 13:05:46.319190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:14.861 [2024-11-29 13:05:46.323889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.861 [2024-11-29 13:05:46.324151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.861 [2024-11-29 13:05:46.324171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:14.861 [2024-11-29 13:05:46.329017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.861 [2024-11-29 13:05:46.329109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.861 [2024-11-29 13:05:46.329128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:14.861 [2024-11-29 13:05:46.333838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.861 [2024-11-29 13:05:46.333959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.861 [2024-11-29 13:05:46.333979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:14.861 [2024-11-29 13:05:46.338640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.861 [2024-11-29 13:05:46.338749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.861 [2024-11-29 13:05:46.338768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:14.861 [2024-11-29 13:05:46.343427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.861 [2024-11-29 13:05:46.343697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 
nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.861 [2024-11-29 13:05:46.343717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:14.861 [2024-11-29 13:05:46.348614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.861 [2024-11-29 13:05:46.348722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.861 [2024-11-29 13:05:46.348741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:14.861 [2024-11-29 13:05:46.353404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.861 [2024-11-29 13:05:46.353495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.861 [2024-11-29 13:05:46.353514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:14.861 [2024-11-29 13:05:46.358212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.861 [2024-11-29 13:05:46.358308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.861 [2024-11-29 13:05:46.358326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:14.861 [2024-11-29 13:05:46.362985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.861 [2024-11-29 13:05:46.363091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.861 [2024-11-29 13:05:46.363110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:14.861 [2024-11-29 13:05:46.367869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:14.861 [2024-11-29 13:05:46.367990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.861 [2024-11-29 13:05:46.368010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.123 [2024-11-29 13:05:46.372895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.123 [2024-11-29 13:05:46.372987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.123 [2024-11-29 13:05:46.373006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.123 [2024-11-29 13:05:46.377979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.123 [2024-11-29 13:05:46.378069] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.123 [2024-11-29 13:05:46.378088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.123 [2024-11-29 13:05:46.383012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.123 [2024-11-29 13:05:46.383109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.123 [2024-11-29 13:05:46.383129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.123 [2024-11-29 13:05:46.387986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.123 [2024-11-29 13:05:46.388101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.123 [2024-11-29 13:05:46.388120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.123 [2024-11-29 13:05:46.392836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.123 [2024-11-29 13:05:46.392960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.123 [2024-11-29 13:05:46.392979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.123 [2024-11-29 13:05:46.397600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.123 [2024-11-29 13:05:46.397870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.123 [2024-11-29 13:05:46.397889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.123 [2024-11-29 13:05:46.402634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.123 [2024-11-29 13:05:46.402742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.123 [2024-11-29 13:05:46.402760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.123 [2024-11-29 13:05:46.407438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.123 [2024-11-29 13:05:46.407539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.123 [2024-11-29 13:05:46.407558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.123 [2024-11-29 13:05:46.412261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.123 [2024-11-29 13:05:46.412370] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.123 [2024-11-29 13:05:46.412388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.123 [2024-11-29 13:05:46.417076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.123 [2024-11-29 13:05:46.417185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.123 [2024-11-29 13:05:46.417204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.123 [2024-11-29 13:05:46.421837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.123 [2024-11-29 13:05:46.422119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.123 [2024-11-29 13:05:46.422139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.123 [2024-11-29 13:05:46.426851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.123 [2024-11-29 13:05:46.426997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.123 [2024-11-29 13:05:46.427022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.123 [2024-11-29 13:05:46.431655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.123 [2024-11-29 13:05:46.431761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.123 [2024-11-29 13:05:46.431779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.123 [2024-11-29 13:05:46.436396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.123 [2024-11-29 13:05:46.436499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.123 [2024-11-29 13:05:46.436517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.123 [2024-11-29 13:05:46.441283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.123 [2024-11-29 13:05:46.441372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.123 [2024-11-29 13:05:46.441390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.123 [2024-11-29 13:05:46.446063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.123 [2024-11-29 
13:05:46.446163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.123 [2024-11-29 13:05:46.446182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.123 [2024-11-29 13:05:46.450857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.123 [2024-11-29 13:05:46.450987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.123 [2024-11-29 13:05:46.451006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.123 [2024-11-29 13:05:46.455598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.123 [2024-11-29 13:05:46.455706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.123 [2024-11-29 13:05:46.455725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.123 [2024-11-29 13:05:46.460434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.123 [2024-11-29 13:05:46.460543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.123 [2024-11-29 13:05:46.460562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.123 [2024-11-29 13:05:46.465209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.123 [2024-11-29 13:05:46.465319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.123 [2024-11-29 13:05:46.465338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.123 [2024-11-29 13:05:46.469984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.123 [2024-11-29 13:05:46.470089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.123 [2024-11-29 13:05:46.470109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.123 6184.00 IOPS, 773.00 MiB/s [2024-11-29T13:05:46.638Z] [2024-11-29 13:05:46.476118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.124 [2024-11-29 13:05:46.476211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.124 [2024-11-29 13:05:46.476231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.124 [2024-11-29 13:05:46.481630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.124 [2024-11-29 13:05:46.481873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.124 [2024-11-29 13:05:46.481893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.124 [2024-11-29 13:05:46.486693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.124 [2024-11-29 13:05:46.486790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.124 [2024-11-29 13:05:46.486809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.124 [2024-11-29 13:05:46.491551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.124 [2024-11-29 13:05:46.491652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.124 [2024-11-29 13:05:46.491671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.124 [2024-11-29 13:05:46.496366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.124 [2024-11-29 13:05:46.496476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.124 [2024-11-29 13:05:46.496495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.124 [2024-11-29 13:05:46.501187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.124 [2024-11-29 13:05:46.501281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.124 [2024-11-29 13:05:46.501300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.124 [2024-11-29 13:05:46.505954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.124 [2024-11-29 13:05:46.506074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.124 [2024-11-29 13:05:46.506092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.124 [2024-11-29 13:05:46.510743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.124 [2024-11-29 13:05:46.510849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.124 [2024-11-29 13:05:46.510868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.124 [2024-11-29 13:05:46.515548] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.124 [2024-11-29 13:05:46.515657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.124 [2024-11-29 13:05:46.515675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.124 [2024-11-29 13:05:46.520396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.124 [2024-11-29 13:05:46.520507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.124 [2024-11-29 13:05:46.520526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.124 [2024-11-29 13:05:46.525146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.124 [2024-11-29 13:05:46.525256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.124 [2024-11-29 13:05:46.525275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.124 [2024-11-29 13:05:46.529915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.124 [2024-11-29 13:05:46.530020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.124 [2024-11-29 13:05:46.530039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.124 [2024-11-29 13:05:46.534696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.124 [2024-11-29 13:05:46.534785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.124 [2024-11-29 13:05:46.534803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.124 [2024-11-29 13:05:46.539542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.124 [2024-11-29 13:05:46.539649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.124 [2024-11-29 13:05:46.539667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.124 [2024-11-29 13:05:46.544340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.124 [2024-11-29 13:05:46.544450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.124 [2024-11-29 13:05:46.544469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.124 
[2024-11-29 13:05:46.549151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.124 [2024-11-29 13:05:46.549261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.124 [2024-11-29 13:05:46.549280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.124 [2024-11-29 13:05:46.553934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.124 [2024-11-29 13:05:46.554040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.124 [2024-11-29 13:05:46.554058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.124 [2024-11-29 13:05:46.558811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.124 [2024-11-29 13:05:46.558933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.124 [2024-11-29 13:05:46.558952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.124 [2024-11-29 13:05:46.563559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.124 [2024-11-29 13:05:46.563666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.124 [2024-11-29 13:05:46.563685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.124 [2024-11-29 13:05:46.568367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.124 [2024-11-29 13:05:46.568476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.124 [2024-11-29 13:05:46.568495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.124 [2024-11-29 13:05:46.573180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.124 [2024-11-29 13:05:46.573286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.124 [2024-11-29 13:05:46.573306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.124 [2024-11-29 13:05:46.578022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.124 [2024-11-29 13:05:46.578137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.124 [2024-11-29 13:05:46.578156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:19:15.124 [2024-11-29 13:05:46.582763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.124 [2024-11-29 13:05:46.582872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.124 [2024-11-29 13:05:46.582902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.124 [2024-11-29 13:05:46.587576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.124 [2024-11-29 13:05:46.587680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.124 [2024-11-29 13:05:46.587699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.124 [2024-11-29 13:05:46.592398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.124 [2024-11-29 13:05:46.592504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.124 [2024-11-29 13:05:46.592523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.124 [2024-11-29 13:05:46.597626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.124 [2024-11-29 13:05:46.597873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.124 [2024-11-29 13:05:46.597893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.124 [2024-11-29 13:05:46.603190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.124 [2024-11-29 13:05:46.603318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.124 [2024-11-29 13:05:46.603338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.125 [2024-11-29 13:05:46.608429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.125 [2024-11-29 13:05:46.608541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.125 [2024-11-29 13:05:46.608560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.125 [2024-11-29 13:05:46.614270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.125 [2024-11-29 13:05:46.614419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.125 [2024-11-29 13:05:46.614440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.125 [2024-11-29 13:05:46.620063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.125 [2024-11-29 13:05:46.620168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.125 [2024-11-29 13:05:46.620189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.125 [2024-11-29 13:05:46.625578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.125 [2024-11-29 13:05:46.625808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.125 [2024-11-29 13:05:46.625828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.125 [2024-11-29 13:05:46.631112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.125 [2024-11-29 13:05:46.631214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.125 [2024-11-29 13:05:46.631234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.386 [2024-11-29 13:05:46.636245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.386 [2024-11-29 13:05:46.636342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.386 [2024-11-29 13:05:46.636361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.386 [2024-11-29 13:05:46.641274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.386 [2024-11-29 13:05:46.641371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.386 [2024-11-29 13:05:46.641391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.386 [2024-11-29 13:05:46.646400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.386 [2024-11-29 13:05:46.646506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.386 [2024-11-29 13:05:46.646525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.386 [2024-11-29 13:05:46.651369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.386 [2024-11-29 13:05:46.651477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.386 [2024-11-29 13:05:46.651497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.386 [2024-11-29 13:05:46.656217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.386 [2024-11-29 13:05:46.656315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.386 [2024-11-29 13:05:46.656335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.386 [2024-11-29 13:05:46.661450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.386 [2024-11-29 13:05:46.661703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.386 [2024-11-29 13:05:46.661723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.386 [2024-11-29 13:05:46.666559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.386 [2024-11-29 13:05:46.666671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.386 [2024-11-29 13:05:46.666692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.386 [2024-11-29 13:05:46.671524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.386 [2024-11-29 13:05:46.671645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.386 [2024-11-29 13:05:46.671665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.386 [2024-11-29 13:05:46.676427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.386 [2024-11-29 13:05:46.676558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.386 [2024-11-29 13:05:46.676577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.386 [2024-11-29 13:05:46.681613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.386 [2024-11-29 13:05:46.681841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.386 [2024-11-29 13:05:46.681861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.386 [2024-11-29 13:05:46.686638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.386 [2024-11-29 13:05:46.686751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.386 [2024-11-29 13:05:46.686771] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.386 [2024-11-29 13:05:46.691627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.386 [2024-11-29 13:05:46.691752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.386 [2024-11-29 13:05:46.691772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.386 [2024-11-29 13:05:46.696806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.386 [2024-11-29 13:05:46.696930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.386 [2024-11-29 13:05:46.696950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.386 [2024-11-29 13:05:46.701735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.386 [2024-11-29 13:05:46.701975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.386 [2024-11-29 13:05:46.701995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.386 [2024-11-29 13:05:46.706875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.386 [2024-11-29 13:05:46.707019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.386 [2024-11-29 13:05:46.707040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.386 [2024-11-29 13:05:46.712136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.386 [2024-11-29 13:05:46.712250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.386 [2024-11-29 13:05:46.712269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.387 [2024-11-29 13:05:46.717101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.387 [2024-11-29 13:05:46.717209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.387 [2024-11-29 13:05:46.717228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.387 [2024-11-29 13:05:46.722217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.387 [2024-11-29 13:05:46.722319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.387 [2024-11-29 
13:05:46.722339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.387 [2024-11-29 13:05:46.727514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.387 [2024-11-29 13:05:46.727622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.387 [2024-11-29 13:05:46.727642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.387 [2024-11-29 13:05:46.732725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.387 [2024-11-29 13:05:46.732836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.387 [2024-11-29 13:05:46.732855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.387 [2024-11-29 13:05:46.738012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.387 [2024-11-29 13:05:46.738125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.387 [2024-11-29 13:05:46.738145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.387 [2024-11-29 13:05:46.743271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.387 [2024-11-29 13:05:46.743416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.387 [2024-11-29 13:05:46.743451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.387 [2024-11-29 13:05:46.748658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.387 [2024-11-29 13:05:46.748772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.387 [2024-11-29 13:05:46.748792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.387 [2024-11-29 13:05:46.753808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.387 [2024-11-29 13:05:46.754092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.387 [2024-11-29 13:05:46.754112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.387 [2024-11-29 13:05:46.759205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.387 [2024-11-29 13:05:46.759351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:15.387 [2024-11-29 13:05:46.759385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.387 [2024-11-29 13:05:46.764634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.387 [2024-11-29 13:05:46.764747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.387 [2024-11-29 13:05:46.764766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.387 [2024-11-29 13:05:46.769705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.387 [2024-11-29 13:05:46.769990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.387 [2024-11-29 13:05:46.770011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.387 [2024-11-29 13:05:46.775059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.387 [2024-11-29 13:05:46.775150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.387 [2024-11-29 13:05:46.775172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.387 [2024-11-29 13:05:46.780156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.387 [2024-11-29 13:05:46.780248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.387 [2024-11-29 13:05:46.780267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.387 [2024-11-29 13:05:46.785138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.387 [2024-11-29 13:05:46.785246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.387 [2024-11-29 13:05:46.785266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.387 [2024-11-29 13:05:46.790343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.387 [2024-11-29 13:05:46.790454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.387 [2024-11-29 13:05:46.790473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.387 [2024-11-29 13:05:46.795393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.387 [2024-11-29 13:05:46.795500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1856 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.387 [2024-11-29 13:05:46.795518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.387 [2024-11-29 13:05:46.800403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.387 [2024-11-29 13:05:46.800496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.387 [2024-11-29 13:05:46.800515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.387 [2024-11-29 13:05:46.805222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.387 [2024-11-29 13:05:46.805331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.387 [2024-11-29 13:05:46.805350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.387 [2024-11-29 13:05:46.810004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.387 [2024-11-29 13:05:46.810097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.387 [2024-11-29 13:05:46.810116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.387 [2024-11-29 13:05:46.814835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.387 [2024-11-29 13:05:46.814940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.387 [2024-11-29 13:05:46.814959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.387 [2024-11-29 13:05:46.819680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.387 [2024-11-29 13:05:46.819770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.387 [2024-11-29 13:05:46.819789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.388 [2024-11-29 13:05:46.824536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.388 [2024-11-29 13:05:46.824626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.388 [2024-11-29 13:05:46.824646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.388 [2024-11-29 13:05:46.829406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.388 [2024-11-29 13:05:46.829640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 
nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.388 [2024-11-29 13:05:46.829659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.388 [2024-11-29 13:05:46.834364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.388 [2024-11-29 13:05:46.834454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.388 [2024-11-29 13:05:46.834473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.388 [2024-11-29 13:05:46.839152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.388 [2024-11-29 13:05:46.839264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.388 [2024-11-29 13:05:46.839298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.388 [2024-11-29 13:05:46.843977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.388 [2024-11-29 13:05:46.844068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.388 [2024-11-29 13:05:46.844086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.388 [2024-11-29 13:05:46.848726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.388 [2024-11-29 13:05:46.848825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.388 [2024-11-29 13:05:46.848844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.388 [2024-11-29 13:05:46.853482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.388 [2024-11-29 13:05:46.853715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.388 [2024-11-29 13:05:46.853734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.388 [2024-11-29 13:05:46.858508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.388 [2024-11-29 13:05:46.858613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.388 [2024-11-29 13:05:46.858632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.388 [2024-11-29 13:05:46.863450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.388 [2024-11-29 13:05:46.863573] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.388 [2024-11-29 13:05:46.863592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.388 [2024-11-29 13:05:46.868305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.388 [2024-11-29 13:05:46.868412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.388 [2024-11-29 13:05:46.868430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.388 [2024-11-29 13:05:46.873069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.388 [2024-11-29 13:05:46.873178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.388 [2024-11-29 13:05:46.873197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.388 [2024-11-29 13:05:46.877841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.388 [2024-11-29 13:05:46.878108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.388 [2024-11-29 13:05:46.878128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.388 [2024-11-29 13:05:46.882763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.388 [2024-11-29 13:05:46.882871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.388 [2024-11-29 13:05:46.882889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.388 [2024-11-29 13:05:46.887563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.388 [2024-11-29 13:05:46.887670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.388 [2024-11-29 13:05:46.887689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.388 [2024-11-29 13:05:46.892360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.388 [2024-11-29 13:05:46.892468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.388 [2024-11-29 13:05:46.892486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.388 [2024-11-29 13:05:46.897134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.388 [2024-11-29 13:05:46.897222] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.388 [2024-11-29 13:05:46.897240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.649 [2024-11-29 13:05:46.901906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.649 [2024-11-29 13:05:46.902010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.649 [2024-11-29 13:05:46.902029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.649 [2024-11-29 13:05:46.906689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.649 [2024-11-29 13:05:46.906799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.649 [2024-11-29 13:05:46.906817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.649 [2024-11-29 13:05:46.911370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.649 [2024-11-29 13:05:46.911476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.649 [2024-11-29 13:05:46.911495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.649 [2024-11-29 13:05:46.916117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.649 [2024-11-29 13:05:46.916226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.649 [2024-11-29 13:05:46.916245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.649 [2024-11-29 13:05:46.920874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.649 [2024-11-29 13:05:46.920991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.649 [2024-11-29 13:05:46.921010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.649 [2024-11-29 13:05:46.925659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.649 [2024-11-29 13:05:46.925769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.649 [2024-11-29 13:05:46.925788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.650 [2024-11-29 13:05:46.930354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.650 [2024-11-29 
13:05:46.930463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.650 [2024-11-29 13:05:46.930481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.650 [2024-11-29 13:05:46.935118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.650 [2024-11-29 13:05:46.935228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.650 [2024-11-29 13:05:46.935247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.650 [2024-11-29 13:05:46.939846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.650 [2024-11-29 13:05:46.939949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.650 [2024-11-29 13:05:46.939968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.650 [2024-11-29 13:05:46.944579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.650 [2024-11-29 13:05:46.944811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.650 [2024-11-29 13:05:46.944829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.650 [2024-11-29 13:05:46.949436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.650 [2024-11-29 13:05:46.949532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.650 [2024-11-29 13:05:46.949551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.650 [2024-11-29 13:05:46.954263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.650 [2024-11-29 13:05:46.954354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.650 [2024-11-29 13:05:46.954372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.650 [2024-11-29 13:05:46.959046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.650 [2024-11-29 13:05:46.959119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.650 [2024-11-29 13:05:46.959138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.650 [2024-11-29 13:05:46.963786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with 
pdu=0x200016eff3c8 00:19:15.650 [2024-11-29 13:05:46.963892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.650 [2024-11-29 13:05:46.963911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.650 [2024-11-29 13:05:46.968458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.650 [2024-11-29 13:05:46.968708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.650 [2024-11-29 13:05:46.968727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.650 [2024-11-29 13:05:46.973499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.650 [2024-11-29 13:05:46.973624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.650 [2024-11-29 13:05:46.973644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.650 [2024-11-29 13:05:46.978283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.650 [2024-11-29 13:05:46.978393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.650 [2024-11-29 13:05:46.978412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.650 [2024-11-29 13:05:46.983079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.650 [2024-11-29 13:05:46.983165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.650 [2024-11-29 13:05:46.983184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.650 [2024-11-29 13:05:46.988094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.650 [2024-11-29 13:05:46.988204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.650 [2024-11-29 13:05:46.988223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.650 [2024-11-29 13:05:46.993185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.650 [2024-11-29 13:05:46.993290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.650 [2024-11-29 13:05:46.993309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.650 [2024-11-29 13:05:46.998149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.650 [2024-11-29 13:05:46.998259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.650 [2024-11-29 13:05:46.998278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.650 [2024-11-29 13:05:47.003090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.650 [2024-11-29 13:05:47.003197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.650 [2024-11-29 13:05:47.003216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.650 [2024-11-29 13:05:47.008087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.650 [2024-11-29 13:05:47.008185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.650 [2024-11-29 13:05:47.008205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.650 [2024-11-29 13:05:47.013228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.650 [2024-11-29 13:05:47.013364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.650 [2024-11-29 13:05:47.013383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.650 [2024-11-29 13:05:47.018297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.650 [2024-11-29 13:05:47.018401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.650 [2024-11-29 13:05:47.018420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.650 [2024-11-29 13:05:47.023328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.650 [2024-11-29 13:05:47.023581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.650 [2024-11-29 13:05:47.023600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.650 [2024-11-29 13:05:47.028404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.651 [2024-11-29 13:05:47.028511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.651 [2024-11-29 13:05:47.028530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.651 [2024-11-29 13:05:47.033293] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.651 [2024-11-29 13:05:47.033397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.651 [2024-11-29 13:05:47.033415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.651 [2024-11-29 13:05:47.038095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.651 [2024-11-29 13:05:47.038181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.651 [2024-11-29 13:05:47.038200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.651 [2024-11-29 13:05:47.042845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.651 [2024-11-29 13:05:47.042975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.651 [2024-11-29 13:05:47.042994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.651 [2024-11-29 13:05:47.047512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.651 [2024-11-29 13:05:47.047759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.651 [2024-11-29 13:05:47.047778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.651 [2024-11-29 13:05:47.052451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.651 [2024-11-29 13:05:47.052560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.651 [2024-11-29 13:05:47.052578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.651 [2024-11-29 13:05:47.057157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.651 [2024-11-29 13:05:47.057265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.651 [2024-11-29 13:05:47.057284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.651 [2024-11-29 13:05:47.061958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.651 [2024-11-29 13:05:47.062053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.651 [2024-11-29 13:05:47.062071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.651 [2024-11-29 13:05:47.066659] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.651 [2024-11-29 13:05:47.066756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.651 [2024-11-29 13:05:47.066775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.651 [2024-11-29 13:05:47.071317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.651 [2024-11-29 13:05:47.071546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.651 [2024-11-29 13:05:47.071565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.651 [2024-11-29 13:05:47.076263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.651 [2024-11-29 13:05:47.076370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.651 [2024-11-29 13:05:47.076389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.651 [2024-11-29 13:05:47.080945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.651 [2024-11-29 13:05:47.081038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.651 [2024-11-29 13:05:47.081057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.651 [2024-11-29 13:05:47.085656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.651 [2024-11-29 13:05:47.085761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.651 [2024-11-29 13:05:47.085780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.651 [2024-11-29 13:05:47.090352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.651 [2024-11-29 13:05:47.090461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.651 [2024-11-29 13:05:47.090480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.651 [2024-11-29 13:05:47.095120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.651 [2024-11-29 13:05:47.095225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.651 [2024-11-29 13:05:47.095244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.651 
[2024-11-29 13:05:47.100076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.651 [2024-11-29 13:05:47.100180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.651 [2024-11-29 13:05:47.100199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.651 [2024-11-29 13:05:47.105576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.651 [2024-11-29 13:05:47.105654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.651 [2024-11-29 13:05:47.105674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.651 [2024-11-29 13:05:47.111040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.651 [2024-11-29 13:05:47.111142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.651 [2024-11-29 13:05:47.111163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.651 [2024-11-29 13:05:47.116751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.651 [2024-11-29 13:05:47.116927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.651 [2024-11-29 13:05:47.116948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.651 [2024-11-29 13:05:47.122882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.651 [2024-11-29 13:05:47.123038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.651 [2024-11-29 13:05:47.123060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.651 [2024-11-29 13:05:47.128441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.651 [2024-11-29 13:05:47.128549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.651 [2024-11-29 13:05:47.128568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.651 [2024-11-29 13:05:47.133724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.651 [2024-11-29 13:05:47.133833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.652 [2024-11-29 13:05:47.133852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:19:15.652 [2024-11-29 13:05:47.139219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.652 [2024-11-29 13:05:47.139332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.652 [2024-11-29 13:05:47.139360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.652 [2024-11-29 13:05:47.144286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.652 [2024-11-29 13:05:47.144392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.652 [2024-11-29 13:05:47.144410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.652 [2024-11-29 13:05:47.149114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.652 [2024-11-29 13:05:47.149213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.652 [2024-11-29 13:05:47.149232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.652 [2024-11-29 13:05:47.153937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.652 [2024-11-29 13:05:47.154040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.652 [2024-11-29 13:05:47.154059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.652 [2024-11-29 13:05:47.158946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.652 [2024-11-29 13:05:47.159082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.652 [2024-11-29 13:05:47.159101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.912 [2024-11-29 13:05:47.164178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.912 [2024-11-29 13:05:47.164288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.912 [2024-11-29 13:05:47.164307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.912 [2024-11-29 13:05:47.168986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.912 [2024-11-29 13:05:47.169092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.912 [2024-11-29 13:05:47.169110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.912 [2024-11-29 13:05:47.173830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.912 [2024-11-29 13:05:47.173945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.912 [2024-11-29 13:05:47.173964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.912 [2024-11-29 13:05:47.178813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.912 [2024-11-29 13:05:47.178935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.912 [2024-11-29 13:05:47.178955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.912 [2024-11-29 13:05:47.183609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.912 [2024-11-29 13:05:47.183851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.912 [2024-11-29 13:05:47.183871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.913 [2024-11-29 13:05:47.188567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.913 [2024-11-29 13:05:47.188662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-29 13:05:47.188681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.913 [2024-11-29 13:05:47.193350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.913 [2024-11-29 13:05:47.193452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-29 13:05:47.193470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.913 [2024-11-29 13:05:47.198404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.913 [2024-11-29 13:05:47.198502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-29 13:05:47.198521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.913 [2024-11-29 13:05:47.203143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.913 [2024-11-29 13:05:47.203261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-29 13:05:47.203280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.913 [2024-11-29 13:05:47.207912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.913 [2024-11-29 13:05:47.208001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-29 13:05:47.208021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.913 [2024-11-29 13:05:47.212679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.913 [2024-11-29 13:05:47.212789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-29 13:05:47.212808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.913 [2024-11-29 13:05:47.217683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.913 [2024-11-29 13:05:47.217797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-29 13:05:47.217816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.913 [2024-11-29 13:05:47.222627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.913 [2024-11-29 13:05:47.222735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-29 13:05:47.222754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.913 [2024-11-29 13:05:47.227607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.913 [2024-11-29 13:05:47.227852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-29 13:05:47.227871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.913 [2024-11-29 13:05:47.232538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.913 [2024-11-29 13:05:47.232649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-29 13:05:47.232667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.913 [2024-11-29 13:05:47.237377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.913 [2024-11-29 13:05:47.237484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-29 13:05:47.237503] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.913 [2024-11-29 13:05:47.242236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.913 [2024-11-29 13:05:47.242346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-29 13:05:47.242365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.913 [2024-11-29 13:05:47.247123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.913 [2024-11-29 13:05:47.247214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-29 13:05:47.247233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.913 [2024-11-29 13:05:47.251894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.913 [2024-11-29 13:05:47.252150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-29 13:05:47.252168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.913 [2024-11-29 13:05:47.256846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.913 [2024-11-29 13:05:47.256950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-29 13:05:47.256969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.913 [2024-11-29 13:05:47.261609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.913 [2024-11-29 13:05:47.261717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-29 13:05:47.261736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.913 [2024-11-29 13:05:47.266434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.913 [2024-11-29 13:05:47.266541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-29 13:05:47.266559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.913 [2024-11-29 13:05:47.271218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.913 [2024-11-29 13:05:47.271358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-29 
13:05:47.271376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.913 [2024-11-29 13:05:47.276005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.913 [2024-11-29 13:05:47.276120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-29 13:05:47.276138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.913 [2024-11-29 13:05:47.280756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.913 [2024-11-29 13:05:47.280863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-29 13:05:47.280881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.913 [2024-11-29 13:05:47.285528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.913 [2024-11-29 13:05:47.285635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-29 13:05:47.285653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.913 [2024-11-29 13:05:47.290332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.913 [2024-11-29 13:05:47.290439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-29 13:05:47.290457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.913 [2024-11-29 13:05:47.295150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.913 [2024-11-29 13:05:47.295241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-29 13:05:47.295261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.913 [2024-11-29 13:05:47.299963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.913 [2024-11-29 13:05:47.300056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-29 13:05:47.300075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.913 [2024-11-29 13:05:47.304732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.913 [2024-11-29 13:05:47.304842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:15.913 [2024-11-29 13:05:47.304860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.913 [2024-11-29 13:05:47.309459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.913 [2024-11-29 13:05:47.309566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-29 13:05:47.309585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.913 [2024-11-29 13:05:47.314255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.913 [2024-11-29 13:05:47.314364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-29 13:05:47.314383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.914 [2024-11-29 13:05:47.319025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.914 [2024-11-29 13:05:47.319130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-29 13:05:47.319149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.914 [2024-11-29 13:05:47.323731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.914 [2024-11-29 13:05:47.323993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-29 13:05:47.324012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.914 [2024-11-29 13:05:47.328689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.914 [2024-11-29 13:05:47.328798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-29 13:05:47.328816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.914 [2024-11-29 13:05:47.333436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.914 [2024-11-29 13:05:47.333542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-29 13:05:47.333560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.914 [2024-11-29 13:05:47.338212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.914 [2024-11-29 13:05:47.338299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19040 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-29 13:05:47.338318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.914 [2024-11-29 13:05:47.342991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.914 [2024-11-29 13:05:47.343090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-29 13:05:47.343109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.914 [2024-11-29 13:05:47.347728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.914 [2024-11-29 13:05:47.347974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-29 13:05:47.347994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.914 [2024-11-29 13:05:47.352703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.914 [2024-11-29 13:05:47.352814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-29 13:05:47.352832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.914 [2024-11-29 13:05:47.357505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.914 [2024-11-29 13:05:47.357610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-29 13:05:47.357629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.914 [2024-11-29 13:05:47.362342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.914 [2024-11-29 13:05:47.362448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-29 13:05:47.362466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.914 [2024-11-29 13:05:47.367333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.914 [2024-11-29 13:05:47.367504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-29 13:05:47.367523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.914 [2024-11-29 13:05:47.372272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.914 [2024-11-29 13:05:47.372392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 
nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-29 13:05:47.372410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.914 [2024-11-29 13:05:47.377152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.914 [2024-11-29 13:05:47.377257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-29 13:05:47.377276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.914 [2024-11-29 13:05:47.381944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.914 [2024-11-29 13:05:47.382041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-29 13:05:47.382060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.914 [2024-11-29 13:05:47.386763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.914 [2024-11-29 13:05:47.386868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-29 13:05:47.386887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.914 [2024-11-29 13:05:47.391651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.914 [2024-11-29 13:05:47.391933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-29 13:05:47.391953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.914 [2024-11-29 13:05:47.396672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.914 [2024-11-29 13:05:47.396779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-29 13:05:47.396798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.914 [2024-11-29 13:05:47.401432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.914 [2024-11-29 13:05:47.401538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-29 13:05:47.401557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.914 [2024-11-29 13:05:47.406248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.914 [2024-11-29 13:05:47.406338] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-29 13:05:47.406357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.914 [2024-11-29 13:05:47.411049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.914 [2024-11-29 13:05:47.411164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-29 13:05:47.411186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.914 [2024-11-29 13:05:47.415803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.914 [2024-11-29 13:05:47.415908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-29 13:05:47.415940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.914 [2024-11-29 13:05:47.420554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:15.914 [2024-11-29 13:05:47.420664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-29 13:05:47.420682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.175 [2024-11-29 13:05:47.425394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:16.175 [2024-11-29 13:05:47.425499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.175 [2024-11-29 13:05:47.425517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.175 [2024-11-29 13:05:47.430205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:16.175 [2024-11-29 13:05:47.430300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.175 [2024-11-29 13:05:47.430319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.175 [2024-11-29 13:05:47.435019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:16.175 [2024-11-29 13:05:47.435118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.175 [2024-11-29 13:05:47.435137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.175 [2024-11-29 13:05:47.439750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:16.175 [2024-11-29 13:05:47.439858] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.175 [2024-11-29 13:05:47.439877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.175 [2024-11-29 13:05:47.444455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:16.175 [2024-11-29 13:05:47.444570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.175 [2024-11-29 13:05:47.444589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.175 [2024-11-29 13:05:47.449198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:16.175 [2024-11-29 13:05:47.449294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.175 [2024-11-29 13:05:47.449312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.175 [2024-11-29 13:05:47.453906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:16.175 [2024-11-29 13:05:47.454013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.175 [2024-11-29 13:05:47.454032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.175 [2024-11-29 13:05:47.458666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:16.175 [2024-11-29 13:05:47.458753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.175 [2024-11-29 13:05:47.458771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.175 [2024-11-29 13:05:47.463404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:16.175 [2024-11-29 13:05:47.463517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.175 [2024-11-29 13:05:47.463536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.175 [2024-11-29 13:05:47.468200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) with pdu=0x200016eff3c8 00:19:16.175 [2024-11-29 13:05:47.468293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.175 [2024-11-29 13:05:47.468311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.175 6230.00 IOPS, 778.75 MiB/s [2024-11-29T13:05:47.690Z] [2024-11-29 13:05:47.473891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19bc5b0) 
with pdu=0x200016eff3c8 00:19:16.175 [2024-11-29 13:05:47.473983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.175 [2024-11-29 13:05:47.474003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.175 00:19:16.175 Latency(us) 00:19:16.175 [2024-11-29T13:05:47.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.175 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:19:16.175 nvme0n1 : 2.00 6226.69 778.34 0.00 0.00 2564.38 1630.95 6315.29 00:19:16.175 [2024-11-29T13:05:47.690Z] =================================================================================================================== 00:19:16.175 [2024-11-29T13:05:47.690Z] Total : 6226.69 778.34 0.00 0.00 2564.38 1630.95 6315.29 00:19:16.175 { 00:19:16.175 "results": [ 00:19:16.175 { 00:19:16.175 "job": "nvme0n1", 00:19:16.175 "core_mask": "0x2", 00:19:16.175 "workload": "randwrite", 00:19:16.175 "status": "finished", 00:19:16.175 "queue_depth": 16, 00:19:16.175 "io_size": 131072, 00:19:16.175 "runtime": 2.003471, 00:19:16.175 "iops": 6226.693573303532, 00:19:16.175 "mibps": 778.3366966629414, 00:19:16.175 "io_failed": 0, 00:19:16.175 "io_timeout": 0, 00:19:16.175 "avg_latency_us": 2564.3759816359993, 00:19:16.175 "min_latency_us": 1630.9527272727273, 00:19:16.175 "max_latency_us": 6315.2872727272725 00:19:16.175 } 00:19:16.175 ], 00:19:16.175 "core_count": 1 00:19:16.175 } 00:19:16.175 13:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:16.175 13:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:16.175 13:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:16.175 | .driver_specific 00:19:16.175 | .nvme_error 00:19:16.175 | .status_code 00:19:16.175 | .command_transient_transport_error' 00:19:16.175 13:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:16.436 13:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 403 > 0 )) 00:19:16.436 13:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80628 00:19:16.436 13:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80628 ']' 00:19:16.436 13:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80628 00:19:16.436 13:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:19:16.436 13:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:16.436 13:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80628 00:19:16.436 killing process with pid 80628 00:19:16.436 Received shutdown signal, test time was about 2.000000 seconds 00:19:16.436 00:19:16.436 Latency(us) 00:19:16.436 [2024-11-29T13:05:47.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.436 [2024-11-29T13:05:47.951Z] 
=================================================================================================================== 00:19:16.436 [2024-11-29T13:05:47.951Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:16.436 13:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:16.436 13:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:16.436 13:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80628' 00:19:16.436 13:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80628 00:19:16.436 13:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80628 00:19:16.695 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80441 00:19:16.695 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80441 ']' 00:19:16.695 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80441 00:19:16.695 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:19:16.695 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:16.695 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80441 00:19:16.695 killing process with pid 80441 00:19:16.695 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:16.695 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:16.695 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80441' 00:19:16.695 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80441 00:19:16.695 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80441 00:19:16.955 ************************************ 00:19:16.955 END TEST nvmf_digest_error 00:19:16.955 ************************************ 00:19:16.955 00:19:16.955 real 0m16.875s 00:19:16.955 user 0m31.554s 00:19:16.955 sys 0m5.413s 00:19:16.955 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:16.955 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:16.955 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:19:16.955 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:19:16.955 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:16.955 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:19:17.214 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:17.214 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:19:17.214 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:17.214 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:17.214 
rmmod nvme_tcp 00:19:17.214 rmmod nvme_fabrics 00:19:17.214 rmmod nvme_keyring 00:19:17.214 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:17.214 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:19:17.214 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:19:17.214 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 80441 ']' 00:19:17.214 Process with pid 80441 is not found 00:19:17.214 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 80441 00:19:17.214 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 80441 ']' 00:19:17.214 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 80441 00:19:17.214 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80441) - No such process 00:19:17.214 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 80441 is not found' 00:19:17.214 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:17.214 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:17.214 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:17.214 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:19:17.214 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:17.214 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:19:17.214 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:19:17.214 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:17.214 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:17.214 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:17.215 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:17.215 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:17.215 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:17.215 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:17.215 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:17.215 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:17.215 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:17.215 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:17.215 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:17.215 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:17.215 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:17.215 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:17.473 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # 
remove_spdk_ns 00:19:17.473 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.473 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:17.473 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.473 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:19:17.473 00:19:17.473 real 0m34.769s 00:19:17.473 user 1m3.683s 00:19:17.473 sys 0m11.580s 00:19:17.473 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:17.473 ************************************ 00:19:17.473 END TEST nvmf_digest 00:19:17.473 ************************************ 00:19:17.474 13:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:17.474 13:05:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:19:17.474 13:05:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:19:17.474 13:05:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:19:17.474 13:05:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:17.474 13:05:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:17.474 13:05:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.474 ************************************ 00:19:17.474 START TEST nvmf_host_multipath 00:19:17.474 ************************************ 00:19:17.474 13:05:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:19:17.474 * Looking for test storage... 
00:19:17.474 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:17.474 13:05:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:17.474 13:05:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:19:17.474 13:05:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:17.733 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:17.733 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:17.733 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:17.733 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:17.733 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:19:17.733 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:19:17.733 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:19:17.733 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:19:17.733 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:19:17.733 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:19:17.733 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:19:17.733 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:17.733 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:19:17.733 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:19:17.733 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:17.733 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:17.733 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:19:17.733 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:19:17.733 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:17.733 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:19:17.733 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:19:17.733 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:19:17.733 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:19:17.733 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:17.733 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:19:17.733 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:19:17.733 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:17.733 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:17.733 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:19:17.733 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:17.733 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:17.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.734 --rc genhtml_branch_coverage=1 00:19:17.734 --rc genhtml_function_coverage=1 00:19:17.734 --rc genhtml_legend=1 00:19:17.734 --rc geninfo_all_blocks=1 00:19:17.734 --rc geninfo_unexecuted_blocks=1 00:19:17.734 00:19:17.734 ' 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:17.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.734 --rc genhtml_branch_coverage=1 00:19:17.734 --rc genhtml_function_coverage=1 00:19:17.734 --rc genhtml_legend=1 00:19:17.734 --rc geninfo_all_blocks=1 00:19:17.734 --rc geninfo_unexecuted_blocks=1 00:19:17.734 00:19:17.734 ' 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:17.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.734 --rc genhtml_branch_coverage=1 00:19:17.734 --rc genhtml_function_coverage=1 00:19:17.734 --rc genhtml_legend=1 00:19:17.734 --rc geninfo_all_blocks=1 00:19:17.734 --rc geninfo_unexecuted_blocks=1 00:19:17.734 00:19:17.734 ' 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:17.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.734 --rc genhtml_branch_coverage=1 00:19:17.734 --rc genhtml_function_coverage=1 00:19:17.734 --rc genhtml_legend=1 00:19:17.734 --rc geninfo_all_blocks=1 00:19:17.734 --rc geninfo_unexecuted_blocks=1 00:19:17.734 00:19:17.734 ' 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:17.734 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:17.734 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:17.735 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:17.735 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:17.735 Cannot find device "nvmf_init_br" 00:19:17.735 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:19:17.735 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:17.735 Cannot find device "nvmf_init_br2" 00:19:17.735 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:19:17.735 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:17.735 Cannot find device "nvmf_tgt_br" 00:19:17.735 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:19:17.735 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:17.735 Cannot find device "nvmf_tgt_br2" 00:19:17.735 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:19:17.735 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:17.735 Cannot find device "nvmf_init_br" 00:19:17.735 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:19:17.735 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:17.735 Cannot find device "nvmf_init_br2" 00:19:17.735 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:19:17.735 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:17.735 Cannot find device "nvmf_tgt_br" 00:19:17.735 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:19:17.735 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:17.735 Cannot find device "nvmf_tgt_br2" 00:19:17.735 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:19:17.735 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:17.735 Cannot find device "nvmf_br" 00:19:17.735 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:19:17.735 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:17.735 Cannot find device "nvmf_init_if" 00:19:17.735 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:19:17.735 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:17.735 Cannot find device "nvmf_init_if2" 00:19:17.735 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:19:17.735 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:19:17.735 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:17.735 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:19:17.735 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:17.735 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:17.735 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:19:17.735 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:17.735 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:17.735 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:17.735 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
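The nvmf_veth_init trace around this point builds the virtual test network used for the multipath run: veth pairs for the initiator side (10.0.0.1 and 10.0.0.2 on the host) and for the target side (10.0.0.3 and 10.0.0.4 inside the nvmf_tgt_ns_spdk namespace), with the bridge-side peers enslaved to nvmf_br and iptables rules opening TCP port 4420. A minimal standalone sketch of that topology, reduced to one initiator/target pair, follows; interface, namespace, and address names are taken from the log, while the single-pair reduction and the self-contained script form are assumptions.

#!/usr/bin/env bash
# Sketch: single-pair version of the veth/bridge topology set up by nvmf_veth_init.
set -euo pipefail

NS=nvmf_tgt_ns_spdk                      # target network namespace (name from the log)
ip netns add "$NS"

# One veth pair for the initiator and one for the target; the *_br peers join the bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns "$NS"

# Addressing follows the log: initiator 10.0.0.1/24 on the host, target 10.0.0.3/24 in the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set lo up

# Bridge the host-side peers together so initiator and target can reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Open the NVMe/TCP port on the initiator interface, mirroring the ipts helper.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF

# Reachability check, as the log does with ping -c 1 before starting the target.
ping -c 1 10.0.0.3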
00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:17.994 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:17.994 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:19:17.994 00:19:17.994 --- 10.0.0.3 ping statistics --- 00:19:17.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.994 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:17.994 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:17.994 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:19:17.994 00:19:17.994 --- 10.0.0.4 ping statistics --- 00:19:17.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.994 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:17.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:17.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:19:17.994 00:19:17.994 --- 10.0.0.1 ping statistics --- 00:19:17.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.994 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:17.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:17.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:19:17.994 00:19:17.994 --- 10.0.0.2 ping statistics --- 00:19:17.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.994 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=80939 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:17.994 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 80939 00:19:17.995 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80939 ']' 00:19:17.995 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.995 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:17.995 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.995 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:17.995 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:18.253 [2024-11-29 13:05:49.531857] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
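Once the nvmf_tgt application is up inside the namespace and its RPC socket is listening, the multipath test provisions it over /var/tmp/spdk.sock (the corresponding rpc.py calls appear in the trace that follows): a TCP transport, a 64 MiB malloc bdev, the nqn.2016-06.io.spdk:cnode1 subsystem with ANA reporting, and listeners on 10.0.0.3 ports 4420 and 4421. A condensed sketch of that provisioning sequence, assuming rpc.py talks to the default RPC socket of the already-running target, would be:

#!/usr/bin/env bash
# Sketch: target-side provisioning performed by multipath.sh before bdevperf starts.
set -euo pipefail
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# TCP transport with an 8192-byte in-capsule data size (-u), as in the log.
"$rpc" nvmf_create_transport -t tcp -o -u 8192

# Backing namespace: 64 MiB malloc bdev with 512-byte blocks.
"$rpc" bdev_malloc_create 64 512 -b Malloc0

# Subsystem with ANA reporting enabled (-r) and at most two controllers (-m 2).
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

# Two listeners on the in-namespace target address, one per multipath path.
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421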
00:19:18.253 [2024-11-29 13:05:49.531973] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:18.253 [2024-11-29 13:05:49.684174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:18.253 [2024-11-29 13:05:49.745643] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:18.253 [2024-11-29 13:05:49.745725] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:18.253 [2024-11-29 13:05:49.745748] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:18.253 [2024-11-29 13:05:49.745759] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:18.253 [2024-11-29 13:05:49.745768] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:18.253 [2024-11-29 13:05:49.747331] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:18.253 [2024-11-29 13:05:49.747352] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.511 [2024-11-29 13:05:49.822072] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:18.511 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:18.511 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:19:18.511 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:18.511 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:18.511 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:18.511 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.511 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80939 00:19:18.512 13:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:18.770 [2024-11-29 13:05:50.241931] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.770 13:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:19.338 Malloc0 00:19:19.338 13:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:19:19.597 13:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:19.597 13:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:19.856 [2024-11-29 13:05:51.326322] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:19.856 13:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:20.116 [2024-11-29 13:05:51.554599] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:20.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:20.116 13:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80983 00:19:20.116 13:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:20.116 13:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:20.116 13:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80983 /var/tmp/bdevperf.sock 00:19:20.116 13:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80983 ']' 00:19:20.116 13:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:20.116 13:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.116 13:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:20.116 13:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.116 13:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:21.053 13:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:21.053 13:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:19:21.053 13:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:21.327 13:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:21.623 Nvme0n1 00:19:21.623 13:05:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:22.190 Nvme0n1 00:19:22.190 13:05:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:19:22.190 13:05:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:23.125 13:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:19:23.125 13:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:23.384 13:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:23.642 13:05:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:19:23.642 13:05:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80939 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:23.642 13:05:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81034 00:19:23.642 13:05:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:30.221 13:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:30.221 13:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:30.221 13:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:30.221 13:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:30.221 Attaching 4 probes... 00:19:30.221 @path[10.0.0.3, 4421]: 16867 00:19:30.221 @path[10.0.0.3, 4421]: 17334 00:19:30.221 @path[10.0.0.3, 4421]: 17336 00:19:30.221 @path[10.0.0.3, 4421]: 17870 00:19:30.221 @path[10.0.0.3, 4421]: 19274 00:19:30.221 13:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:30.221 13:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:30.221 13:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:30.221 13:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:30.221 13:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:30.221 13:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:30.221 13:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81034 00:19:30.221 13:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:30.221 13:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:19:30.221 13:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:30.221 13:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:30.478 13:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:19:30.478 13:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81147 00:19:30.478 13:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80939 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:30.478 13:06:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:37.043 13:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:37.043 13:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:37.043 13:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:37.043 13:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:37.043 Attaching 4 probes... 00:19:37.043 @path[10.0.0.3, 4420]: 16398 00:19:37.043 @path[10.0.0.3, 4420]: 16901 00:19:37.043 @path[10.0.0.3, 4420]: 17496 00:19:37.043 @path[10.0.0.3, 4420]: 17529 00:19:37.043 @path[10.0.0.3, 4420]: 17297 00:19:37.043 13:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:37.043 13:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:37.043 13:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:37.043 13:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:37.043 13:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:37.043 13:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:37.043 13:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81147 00:19:37.043 13:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:37.043 13:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:19:37.043 13:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:37.043 13:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:37.313 13:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:19:37.313 13:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81264 00:19:37.313 13:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:37.313 13:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80939 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:43.889 13:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:43.889 13:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:43.889 13:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:43.889 13:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:43.889 Attaching 4 probes... 00:19:43.889 @path[10.0.0.3, 4421]: 13890 00:19:43.889 @path[10.0.0.3, 4421]: 16272 00:19:43.889 @path[10.0.0.3, 4421]: 15332 00:19:43.889 @path[10.0.0.3, 4421]: 16921 00:19:43.889 @path[10.0.0.3, 4421]: 14289 00:19:43.889 13:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:43.889 13:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:43.889 13:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:43.889 13:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:43.889 13:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:43.889 13:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:43.889 13:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81264 00:19:43.889 13:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:43.889 13:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:19:43.889 13:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:43.889 13:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:44.148 13:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:19:44.148 13:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81378 00:19:44.148 13:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80939 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:44.148 13:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:50.717 13:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:50.717 13:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:19:50.717 13:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:19:50.717 13:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:50.717 Attaching 4 probes... 
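At @93/@94 above both portals of cnode1 are flipped to "inaccessible", so the bpftrace capture that follows should record no @path samples on either port and confirm_io_on_port runs with empty expectations. A rough stand-alone sketch of that step, built only from RPCs and arguments already shown in this trace (the rpc.py path, NQN, address and ports come from this run; it assumes that target is still listening):
# mark both paths of the subsystem inaccessible; initiators should stop issuing I/O on either port
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible
# with every path inaccessible, a query for an optimized listener should print nothing
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
    | jq -r '.[] | select(.ana_states[0].ana_state=="optimized") | .address.trsvcid'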
00:19:50.717 00:19:50.717 00:19:50.717 00:19:50.717 00:19:50.717 00:19:50.717 13:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:50.717 13:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:50.717 13:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:50.717 13:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:19:50.717 13:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:19:50.717 13:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:19:50.717 13:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81378 00:19:50.717 13:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:50.717 13:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:19:50.717 13:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:50.976 13:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:51.236 13:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:19:51.236 13:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81490 00:19:51.236 13:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:51.236 13:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80939 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:57.804 13:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:57.804 13:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:57.804 13:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:57.804 13:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:57.804 Attaching 4 probes... 
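confirm_io_on_port works out active_port by asking the target which listener currently reports the expected ANA state; the jq filter at @67 above does that selection. The same query on its own, with NQN, address and filter copied from the trace (the $state variable and the jq --arg parameterisation are only for the sketch, the test inlines the state):
state=optimized
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
    | jq -r --arg s "$state" '.[] | select(.ana_states[0].ana_state==$s) | .address.trsvcid'
# after the @96 transition above (4420 left non_optimized, 4421 set optimized) this prints 4421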
00:19:57.804 @path[10.0.0.3, 4421]: 14909 00:19:57.804 @path[10.0.0.3, 4421]: 15095 00:19:57.804 @path[10.0.0.3, 4421]: 15120 00:19:57.804 @path[10.0.0.3, 4421]: 18624 00:19:57.804 @path[10.0.0.3, 4421]: 19405 00:19:57.804 13:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:57.804 13:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:57.804 13:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:57.804 13:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:57.804 13:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:57.804 13:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:57.804 13:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81490 00:19:57.804 13:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:57.804 13:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:57.804 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:19:58.740 13:06:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:19:58.740 13:06:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81614 00:19:58.740 13:06:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:58.740 13:06:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80939 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:05.305 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:05.305 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:20:05.305 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:20:05.305 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:05.305 Attaching 4 probes... 
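@100/@101 above exercise a harder failover: rather than demoting 10.0.0.3:4421 via ANA, the listener is removed outright, and after a short settle all I/O is expected on the surviving non_optimized 4420 path (the probe counts that follow bear that out). A compact sketch of the same step, using only commands visible in this trace and assuming the 4421 listener still exists when it runs:
# drop the optimized portal entirely; the host side must fail over to what is left
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
sleep 1
# the surviving listener (4420, still non_optimized) is the one the query should now return
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
    | jq -r '.[] | select(.ana_states[0].ana_state=="non_optimized") | .address.trsvcid'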
00:20:05.306 @path[10.0.0.3, 4420]: 18512 00:20:05.306 @path[10.0.0.3, 4420]: 18995 00:20:05.306 @path[10.0.0.3, 4420]: 18848 00:20:05.306 @path[10.0.0.3, 4420]: 18944 00:20:05.306 @path[10.0.0.3, 4420]: 18950 00:20:05.306 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:05.306 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:05.306 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:05.306 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:20:05.306 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:20:05.306 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:20:05.306 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81614 00:20:05.306 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:05.306 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:05.306 [2024-11-29 13:06:36.600541] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:05.306 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:05.564 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:20:12.139 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:20:12.139 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81788 00:20:12.139 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:12.139 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80939 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:17.411 13:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:17.411 13:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:17.670 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:17.670 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:17.670 Attaching 4 probes... 
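The @path lines that follow are produced by scripts/bpf/nvmf_path.bt and count I/Os per portal; multipath.sh reduces them to a single port with the awk | cut | sed pipeline at @69 and compares it to the expected port at @70/@71. The same reduction as a stand-alone snippet, assuming a trace.txt in the format shown above (the file path is the one this run uses, the variable names are only for the sketch):
trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
# keep the trsvcid field of every '@path[10.0.0.3, <port>]: <count>' sample,
# strip the trailing ']', and take the first sample as the port that carried the I/O
port=$(awk '$1=="@path[10.0.0.3," {print $2}' "$trace" | cut -d ']' -f1 | sed -n 1p)
[[ "$port" == 4421 ]] && echo "I/O is flowing on the optimized path ($port)"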
00:20:17.670 @path[10.0.0.3, 4421]: 18455 00:20:17.670 @path[10.0.0.3, 4421]: 18776 00:20:17.670 @path[10.0.0.3, 4421]: 18280 00:20:17.670 @path[10.0.0.3, 4421]: 18991 00:20:17.670 @path[10.0.0.3, 4421]: 18758 00:20:17.670 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:17.670 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:17.670 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:17.670 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:17.670 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:17.670 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:17.670 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81788 00:20:17.670 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:17.670 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80983 00:20:17.670 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80983 ']' 00:20:17.670 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80983 00:20:17.670 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:20:17.670 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:17.670 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80983 00:20:17.929 killing process with pid 80983 00:20:17.929 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:17.929 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:17.929 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80983' 00:20:17.929 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80983 00:20:17.929 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80983 00:20:17.929 { 00:20:17.929 "results": [ 00:20:17.929 { 00:20:17.929 "job": "Nvme0n1", 00:20:17.929 "core_mask": "0x4", 00:20:17.929 "workload": "verify", 00:20:17.929 "status": "terminated", 00:20:17.929 "verify_range": { 00:20:17.929 "start": 0, 00:20:17.929 "length": 16384 00:20:17.929 }, 00:20:17.929 "queue_depth": 128, 00:20:17.929 "io_size": 4096, 00:20:17.929 "runtime": 55.534566, 00:20:17.929 "iops": 7576.416461055985, 00:20:17.929 "mibps": 29.59537680099994, 00:20:17.929 "io_failed": 0, 00:20:17.929 "io_timeout": 0, 00:20:17.929 "avg_latency_us": 16864.12709915967, 00:20:17.929 "min_latency_us": 1295.8254545454545, 00:20:17.929 "max_latency_us": 7015926.69090909 00:20:17.929 } 00:20:17.929 ], 00:20:17.929 "core_count": 1 00:20:17.929 } 00:20:18.197 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80983 00:20:18.197 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:18.197 [2024-11-29 13:05:51.637323] Starting SPDK v25.01-pre git sha1 89b293437 / 
DPDK 24.03.0 initialization... 00:20:18.197 [2024-11-29 13:05:51.637472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80983 ] 00:20:18.197 [2024-11-29 13:05:51.785779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.197 [2024-11-29 13:05:51.846122] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:20:18.197 [2024-11-29 13:05:51.908093] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:18.197 Running I/O for 90 seconds... 00:20:18.197 6919.00 IOPS, 27.03 MiB/s [2024-11-29T13:06:49.712Z] 7996.50 IOPS, 31.24 MiB/s [2024-11-29T13:06:49.712Z] 8243.00 IOPS, 32.20 MiB/s [2024-11-29T13:06:49.712Z] 8344.50 IOPS, 32.60 MiB/s [2024-11-29T13:06:49.712Z] 8406.60 IOPS, 32.84 MiB/s [2024-11-29T13:06:49.712Z] 8498.83 IOPS, 33.20 MiB/s [2024-11-29T13:06:49.712Z] 8663.14 IOPS, 33.84 MiB/s [2024-11-29T13:06:49.712Z] 8721.50 IOPS, 34.07 MiB/s [2024-11-29T13:06:49.712Z] [2024-11-29 13:06:01.832590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.197 [2024-11-29 13:06:01.832644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:18.197 [2024-11-29 13:06:01.832703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:45584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.197 [2024-11-29 13:06:01.832723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:18.197 [2024-11-29 13:06:01.832743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:45592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.197 [2024-11-29 13:06:01.832756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:18.197 [2024-11-29 13:06:01.832774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:45600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.197 [2024-11-29 13:06:01.832788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:18.197 [2024-11-29 13:06:01.832807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.197 [2024-11-29 13:06:01.832820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:18.197 [2024-11-29 13:06:01.832855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.197 [2024-11-29 13:06:01.832868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:18.197 [2024-11-29 13:06:01.832887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:45624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.197 [2024-11-29 13:06:01.832915] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:18.197 [2024-11-29 13:06:01.832936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.197 [2024-11-29 13:06:01.832950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:18.197 [2024-11-29 13:06:01.832970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.197 [2024-11-29 13:06:01.832983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:18.197 [2024-11-29 13:06:01.833001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:45648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.197 [2024-11-29 13:06:01.833042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:18.197 [2024-11-29 13:06:01.833063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:45656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.197 [2024-11-29 13:06:01.833076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:18.197 [2024-11-29 13:06:01.833094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.197 [2024-11-29 13:06:01.833124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:18.197 [2024-11-29 13:06:01.833158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:45672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.197 [2024-11-29 13:06:01.833171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:18.197 [2024-11-29 13:06:01.833205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.197 [2024-11-29 13:06:01.833219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:18.197 [2024-11-29 13:06:01.833238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:45192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.197 [2024-11-29 13:06:01.833251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:18.197 [2024-11-29 13:06:01.833271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:45200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.197 [2024-11-29 13:06:01.833285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:18.197 [2024-11-29 13:06:01.833305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:45208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:18.197 [2024-11-29 13:06:01.833318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:18.197 [2024-11-29 13:06:01.833339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:45216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.197 [2024-11-29 13:06:01.833353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:18.197 [2024-11-29 13:06:01.833372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.197 [2024-11-29 13:06:01.833386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:18.197 [2024-11-29 13:06:01.833405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:45232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.197 [2024-11-29 13:06:01.833420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:18.197 [2024-11-29 13:06:01.833440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:45240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.197 [2024-11-29 13:06:01.833454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:18.197 [2024-11-29 13:06:01.833473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:45248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.197 [2024-11-29 13:06:01.833494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:18.197 [2024-11-29 13:06:01.833529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:45688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.197 [2024-11-29 13:06:01.833558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:18.197 [2024-11-29 13:06:01.833577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:45696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.197 [2024-11-29 13:06:01.833590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:18.197 [2024-11-29 13:06:01.833807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.197 [2024-11-29 13:06:01.833862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:18.197 [2024-11-29 13:06:01.833884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.197 [2024-11-29 13:06:01.833913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:18.197 [2024-11-29 13:06:01.833934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 
lba:45720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.197 [2024-11-29 13:06:01.833949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.197 [2024-11-29 13:06:01.833970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.197 [2024-11-29 13:06:01.833985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:18.197 [2024-11-29 13:06:01.834032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:45736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.197 [2024-11-29 13:06:01.834049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:18.197 [2024-11-29 13:06:01.834069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:45744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.197 [2024-11-29 13:06:01.834084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:18.197 [2024-11-29 13:06:01.834104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.197 [2024-11-29 13:06:01.834119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:18.197 [2024-11-29 13:06:01.834156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:45760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.198 [2024-11-29 13:06:01.834170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.834191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:45768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.198 [2024-11-29 13:06:01.834221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.834257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.198 [2024-11-29 13:06:01.834272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.834303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:45784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.198 [2024-11-29 13:06:01.834318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.834353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:45792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.198 [2024-11-29 13:06:01.834368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.834388] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:45800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.198 [2024-11-29 13:06:01.834402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.834420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:45808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.198 [2024-11-29 13:06:01.834450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.834485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:45816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.198 [2024-11-29 13:06:01.834499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.834518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:45824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.198 [2024-11-29 13:06:01.834533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.834582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:45832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.198 [2024-11-29 13:06:01.834615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.834636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:45256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.198 [2024-11-29 13:06:01.834650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.834670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:45264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.198 [2024-11-29 13:06:01.834684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.834704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:45272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.198 [2024-11-29 13:06:01.834730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.834749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:45280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.198 [2024-11-29 13:06:01.834764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.834798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:45288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.198 [2024-11-29 13:06:01.834832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0013 p:0 m:0 
dnr:0 00:20:18.198 [2024-11-29 13:06:01.834896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:45296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.198 [2024-11-29 13:06:01.834918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.834940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:45304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.198 [2024-11-29 13:06:01.835003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.835031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:45312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.198 [2024-11-29 13:06:01.835047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.835069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:45320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.198 [2024-11-29 13:06:01.835085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.835107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:45328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.198 [2024-11-29 13:06:01.835122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.835144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:45336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.198 [2024-11-29 13:06:01.835159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.835181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:45344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.198 [2024-11-29 13:06:01.835196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.835218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.198 [2024-11-29 13:06:01.835233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.835269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:45360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.198 [2024-11-29 13:06:01.835294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.835345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:45368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.198 [2024-11-29 13:06:01.835359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.835379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:45376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.198 [2024-11-29 13:06:01.835393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.835428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.198 [2024-11-29 13:06:01.835441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.835460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.198 [2024-11-29 13:06:01.835483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.835503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:45856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.198 [2024-11-29 13:06:01.835517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.835553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:45864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.198 [2024-11-29 13:06:01.835566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.835586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.198 [2024-11-29 13:06:01.835600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.835619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:45880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.198 [2024-11-29 13:06:01.835634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.835671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:45888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.198 [2024-11-29 13:06:01.835685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.835706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.198 [2024-11-29 13:06:01.835720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.835741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:45904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.198 [2024-11-29 13:06:01.835767] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.835803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.198 [2024-11-29 13:06:01.835817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.835837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.198 [2024-11-29 13:06:01.835867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.835886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.198 [2024-11-29 13:06:01.835900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:18.198 [2024-11-29 13:06:01.835919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:45936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.198 [2024-11-29 13:06:01.835939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.835976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:45944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.199 [2024-11-29 13:06:01.836019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.836046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:45384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.199 [2024-11-29 13:06:01.836061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.836080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:45392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.199 [2024-11-29 13:06:01.836094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.836129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:45400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.199 [2024-11-29 13:06:01.836142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.836176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:45408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.199 [2024-11-29 13:06:01.836190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.836225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:45416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:18.199 [2024-11-29 13:06:01.836239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.836269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:45424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.199 [2024-11-29 13:06:01.836283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.836303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:45432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.199 [2024-11-29 13:06:01.836330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.836350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:45440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.199 [2024-11-29 13:06:01.836365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.836385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:45952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.199 [2024-11-29 13:06:01.836400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.836441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:45960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.199 [2024-11-29 13:06:01.836474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.836497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.199 [2024-11-29 13:06:01.836514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.836536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:45976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.199 [2024-11-29 13:06:01.836551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.836582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.199 [2024-11-29 13:06:01.836598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.836627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:45992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.199 [2024-11-29 13:06:01.836642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.836680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 
lba:46000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.199 [2024-11-29 13:06:01.836695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.836716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:46008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.199 [2024-11-29 13:06:01.836753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.836774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:46016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.199 [2024-11-29 13:06:01.836788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.836824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:45448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.199 [2024-11-29 13:06:01.836838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.836874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:45456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.199 [2024-11-29 13:06:01.836889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.836909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:45464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.199 [2024-11-29 13:06:01.836923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.836944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:45472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.199 [2024-11-29 13:06:01.836958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.836982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:45480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.199 [2024-11-29 13:06:01.837022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.837079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:45488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.199 [2024-11-29 13:06:01.837099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.837137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:45496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.199 [2024-11-29 13:06:01.837152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.837184] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:45504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.199 [2024-11-29 13:06:01.837201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.837224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:45512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.199 [2024-11-29 13:06:01.837239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.837262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:45520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.199 [2024-11-29 13:06:01.837278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.837301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:45528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.199 [2024-11-29 13:06:01.837316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.837344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:45536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.199 [2024-11-29 13:06:01.837360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.837382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:45544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.199 [2024-11-29 13:06:01.837397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.837420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:45552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.199 [2024-11-29 13:06:01.837442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.837465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:45560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.199 [2024-11-29 13:06:01.837485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.839165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:45568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.199 [2024-11-29 13:06:01.839199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.839230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:46024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.199 [2024-11-29 13:06:01.839247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004e p:0 m:0 dnr:0 
00:20:18.199 [2024-11-29 13:06:01.839271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:46032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.199 [2024-11-29 13:06:01.839291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.839323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:46040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.199 [2024-11-29 13:06:01.839339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.839362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:46048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.199 [2024-11-29 13:06:01.839389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:18.199 [2024-11-29 13:06:01.839450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:46056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.199 [2024-11-29 13:06:01.839466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:01.839487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:46064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.200 [2024-11-29 13:06:01.839502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:01.839522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.200 [2024-11-29 13:06:01.839537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:01.839955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:46080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.200 [2024-11-29 13:06:01.840000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:01.840031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:46088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.200 [2024-11-29 13:06:01.840066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:01.840089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.200 [2024-11-29 13:06:01.840104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:01.840135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:46104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.200 [2024-11-29 13:06:01.840150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:114 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:01.840172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:46112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.200 [2024-11-29 13:06:01.840187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:01.840209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:46120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.200 [2024-11-29 13:06:01.840224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:01.840247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:46128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.200 [2024-11-29 13:06:01.840266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:01.840288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:46136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.200 [2024-11-29 13:06:01.840306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:01.840337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:46144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.200 [2024-11-29 13:06:01.840369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:01.840393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.200 [2024-11-29 13:06:01.840410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:01.840431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:46160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.200 [2024-11-29 13:06:01.840456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:01.840508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.200 [2024-11-29 13:06:01.840538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:18.200 8650.00 IOPS, 33.79 MiB/s [2024-11-29T13:06:49.715Z] 8597.00 IOPS, 33.58 MiB/s [2024-11-29T13:06:49.715Z] 8601.64 IOPS, 33.60 MiB/s [2024-11-29T13:06:49.715Z] 8620.17 IOPS, 33.67 MiB/s [2024-11-29T13:06:49.715Z] 8627.23 IOPS, 33.70 MiB/s [2024-11-29T13:06:49.715Z] 8631.00 IOPS, 33.71 MiB/s [2024-11-29T13:06:49.715Z] [2024-11-29 13:06:08.481349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:105032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.200 [2024-11-29 13:06:08.481410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:08.481486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:105040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.200 [2024-11-29 13:06:08.481506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:08.481528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:105048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.200 [2024-11-29 13:06:08.481542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:08.481562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:105056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.200 [2024-11-29 13:06:08.481575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:08.481594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:105064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.200 [2024-11-29 13:06:08.481608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:08.481626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:105072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.200 [2024-11-29 13:06:08.481640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:08.481659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:105080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.200 [2024-11-29 13:06:08.481672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:08.481691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.200 [2024-11-29 13:06:08.481705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:08.481744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:105096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.200 [2024-11-29 13:06:08.481760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:08.481779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:105104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.200 [2024-11-29 13:06:08.481794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:08.481813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:105112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.200 [2024-11-29 13:06:08.481827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:08.481845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.200 [2024-11-29 13:06:08.481859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:08.481877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:105128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.200 [2024-11-29 13:06:08.481890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:08.481927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:105136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.200 [2024-11-29 13:06:08.481941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:08.481959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:105144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.200 [2024-11-29 13:06:08.481973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:08.481991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:105152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.200 [2024-11-29 13:06:08.482005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:08.482023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:104648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.200 [2024-11-29 13:06:08.482036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:08.482056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:104656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.200 [2024-11-29 13:06:08.482069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:08.482088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:104664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.200 [2024-11-29 13:06:08.482101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:08.482120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:104672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.200 [2024-11-29 13:06:08.482133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:08.482152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:104680 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:18.200 [2024-11-29 13:06:08.482174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:08.482194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:104688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.200 [2024-11-29 13:06:08.482213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:08.482232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:104696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.200 [2024-11-29 13:06:08.482245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:18.200 [2024-11-29 13:06:08.482265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:104704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.200 [2024-11-29 13:06:08.482278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:18.201 [2024-11-29 13:06:08.482301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:105160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.201 [2024-11-29 13:06:08.482316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:18.201 [2024-11-29 13:06:08.482336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:105168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.201 [2024-11-29 13:06:08.482351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:18.201 [2024-11-29 13:06:08.482371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.201 [2024-11-29 13:06:08.482384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:18.201 [2024-11-29 13:06:08.482403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:105184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.201 [2024-11-29 13:06:08.482417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:18.201 [2024-11-29 13:06:08.482436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:105192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.201 [2024-11-29 13:06:08.482449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:18.201 [2024-11-29 13:06:08.482468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:105200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.201 [2024-11-29 13:06:08.482481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:18.201 [2024-11-29 13:06:08.482500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:124 nsid:1 lba:105208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.201 [2024-11-29 13:06:08.482513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:18.201 [2024-11-29 13:06:08.482532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:105216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.201 [2024-11-29 13:06:08.482545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:18.201 [2024-11-29 13:06:08.482564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:105224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.201 [2024-11-29 13:06:08.482584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:18.201 [2024-11-29 13:06:08.482605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:105232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.201 [2024-11-29 13:06:08.482619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:18.201 [2024-11-29 13:06:08.482637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.201 [2024-11-29 13:06:08.482651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:18.201 [2024-11-29 13:06:08.482670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:105248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.201 [2024-11-29 13:06:08.482683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:18.201 [2024-11-29 13:06:08.482702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:105256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.201 [2024-11-29 13:06:08.482715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:18.201 [2024-11-29 13:06:08.482734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:105264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.201 [2024-11-29 13:06:08.482747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:18.201 [2024-11-29 13:06:08.482766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:105272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.201 [2024-11-29 13:06:08.482779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:18.201 [2024-11-29 13:06:08.482797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:105280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.201 [2024-11-29 13:06:08.482810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:18.201 [2024-11-29 13:06:08.482829] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.201 [2024-11-29 13:06:08.482852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:18.201 [2024-11-29 13:06:08.482871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:104720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.201 [2024-11-29 13:06:08.482896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:18.201 [2024-11-29 13:06:08.482916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:104728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.201 [2024-11-29 13:06:08.482930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:18.201 [2024-11-29 13:06:08.482949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:104736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.201 [2024-11-29 13:06:08.483005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:18.201 [2024-11-29 13:06:08.483026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:104744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.201 [2024-11-29 13:06:08.483041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:18.201 [2024-11-29 13:06:08.483069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:104752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.201 [2024-11-29 13:06:08.483084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:18.201 [2024-11-29 13:06:08.483104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:104760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.201 [2024-11-29 13:06:08.483119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:18.201 [2024-11-29 13:06:08.483139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:104768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.201 [2024-11-29 13:06:08.483153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:18.201 [2024-11-29 13:06:08.483190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:105288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.201 [2024-11-29 13:06:08.483210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:18.201 [2024-11-29 13:06:08.483231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:105296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.201 [2024-11-29 13:06:08.483247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 
sqhd:0031 p:0 m:0 dnr:0 00:20:18.201 [2024-11-29 13:06:08.483267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:105304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.201 [2024-11-29 13:06:08.483299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:18.201 [2024-11-29 13:06:08.483333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:105312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.201 [2024-11-29 13:06:08.483346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:18.201 [2024-11-29 13:06:08.483365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:105320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.201 [2024-11-29 13:06:08.483379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:18.201 [2024-11-29 13:06:08.483397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:105328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.201 [2024-11-29 13:06:08.483411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:18.201 [2024-11-29 13:06:08.483438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:105336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.201 [2024-11-29 13:06:08.483452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:18.201 [2024-11-29 13:06:08.483470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:105344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.201 [2024-11-29 13:06:08.483483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:18.201 [2024-11-29 13:06:08.483502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:104776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.201 [2024-11-29 13:06:08.483515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.483542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:104784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.202 [2024-11-29 13:06:08.483556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.483575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:104792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.202 [2024-11-29 13:06:08.483588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.483607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:104800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.202 [2024-11-29 13:06:08.483621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.483639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:104808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.202 [2024-11-29 13:06:08.483653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.483672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:104816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.202 [2024-11-29 13:06:08.483685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.483704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:104824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.202 [2024-11-29 13:06:08.483718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.483737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:104832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.202 [2024-11-29 13:06:08.483751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.483769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:105352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.202 [2024-11-29 13:06:08.483782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.483801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:105360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.202 [2024-11-29 13:06:08.483814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.483833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:105368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.202 [2024-11-29 13:06:08.483846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.483864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:105376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.202 [2024-11-29 13:06:08.483877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.483896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:105384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.202 [2024-11-29 13:06:08.483910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.483947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:105392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.202 [2024-11-29 
13:06:08.483980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.484001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:105400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.202 [2024-11-29 13:06:08.484015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.484035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:105408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.202 [2024-11-29 13:06:08.484049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.484071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:105416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.202 [2024-11-29 13:06:08.484086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.484106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:105424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.202 [2024-11-29 13:06:08.484119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.484137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.202 [2024-11-29 13:06:08.484151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.484178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.202 [2024-11-29 13:06:08.484192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.484210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:105448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.202 [2024-11-29 13:06:08.484223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.484242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:105456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.202 [2024-11-29 13:06:08.484256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.484274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:105464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.202 [2024-11-29 13:06:08.484294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.484312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:105472 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:20:18.202 [2024-11-29 13:06:08.484325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.484343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:104840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.202 [2024-11-29 13:06:08.484373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.484392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:104848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.202 [2024-11-29 13:06:08.484412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.484433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.202 [2024-11-29 13:06:08.484446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.484465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:104864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.202 [2024-11-29 13:06:08.484480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.484499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:104872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.202 [2024-11-29 13:06:08.484513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.484532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:104880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.202 [2024-11-29 13:06:08.484546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.484565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:104888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.202 [2024-11-29 13:06:08.484579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.484598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:104896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.202 [2024-11-29 13:06:08.484611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.484645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.202 [2024-11-29 13:06:08.484664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.484684] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:105488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.202 [2024-11-29 13:06:08.484698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.484717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:105496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.202 [2024-11-29 13:06:08.484731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.484769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:105504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.202 [2024-11-29 13:06:08.484783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.484802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:105512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.202 [2024-11-29 13:06:08.484815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.484834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:105520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.202 [2024-11-29 13:06:08.484854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.484874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:105528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.202 [2024-11-29 13:06:08.484888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:18.202 [2024-11-29 13:06:08.484917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:105536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.202 [2024-11-29 13:06:08.484939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:08.484959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:104904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.203 [2024-11-29 13:06:08.484973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:08.484991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:104912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.203 [2024-11-29 13:06:08.485005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:08.485023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:104920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.203 [2024-11-29 13:06:08.485037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 
13:06:08.485055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:104928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.203 [2024-11-29 13:06:08.485069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:08.485087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.203 [2024-11-29 13:06:08.485101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:08.485120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:104944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.203 [2024-11-29 13:06:08.485134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:08.485153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:104952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.203 [2024-11-29 13:06:08.485168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:08.485186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.203 [2024-11-29 13:06:08.485199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:08.485218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:104968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.203 [2024-11-29 13:06:08.485235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:08.485254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:104976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.203 [2024-11-29 13:06:08.485273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:08.485299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.203 [2024-11-29 13:06:08.485314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:08.485339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:104992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.203 [2024-11-29 13:06:08.485353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:08.485372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:105000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.203 [2024-11-29 13:06:08.485385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:48 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:08.485403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:105008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.203 [2024-11-29 13:06:08.485416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:08.485435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:105016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.203 [2024-11-29 13:06:08.485448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:08.486233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:105024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.203 [2024-11-29 13:06:08.486261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:08.486291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:105544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.203 [2024-11-29 13:06:08.486306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:08.486331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:105552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.203 [2024-11-29 13:06:08.486344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:08.486369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:105560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.203 [2024-11-29 13:06:08.486391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:08.486416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:105568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.203 [2024-11-29 13:06:08.486429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:08.486454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:105576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.203 [2024-11-29 13:06:08.486483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:08.486508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:105584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.203 [2024-11-29 13:06:08.486522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:08.486566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:105592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.203 [2024-11-29 13:06:08.486581] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:08.486620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:105600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.203 [2024-11-29 13:06:08.486638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:08.486664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:105608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.203 [2024-11-29 13:06:08.486678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:08.486702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:105616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.203 [2024-11-29 13:06:08.486716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:08.486741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:105624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.203 [2024-11-29 13:06:08.486754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:08.486779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:105632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.203 [2024-11-29 13:06:08.486793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:08.486817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:105640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.203 [2024-11-29 13:06:08.486831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:08.486855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:105648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.203 [2024-11-29 13:06:08.486868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:08.486906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:105656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.203 [2024-11-29 13:06:08.486922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:08.486950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:105664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.203 [2024-11-29 13:06:08.486993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:18.203 8517.93 IOPS, 33.27 MiB/s [2024-11-29T13:06:49.718Z] 8075.62 IOPS, 31.55 MiB/s [2024-11-29T13:06:49.718Z] 8087.18 IOPS, 31.59 MiB/s [2024-11-29T13:06:49.718Z] 
8058.33 IOPS, 31.48 MiB/s [2024-11-29T13:06:49.718Z] 8062.84 IOPS, 31.50 MiB/s [2024-11-29T13:06:49.718Z] 8066.50 IOPS, 31.51 MiB/s [2024-11-29T13:06:49.718Z] 8038.95 IOPS, 31.40 MiB/s [2024-11-29T13:06:49.718Z] [2024-11-29 13:06:15.612122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:102288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.203 [2024-11-29 13:06:15.612187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:15.612257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:102296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.203 [2024-11-29 13:06:15.612300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:15.612323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:102304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.203 [2024-11-29 13:06:15.612336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:15.612355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:102312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.203 [2024-11-29 13:06:15.612368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:15.612386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:102320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.203 [2024-11-29 13:06:15.612400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:18.203 [2024-11-29 13:06:15.612418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:102328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.203 [2024-11-29 13:06:15.612431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.612449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:102336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.204 [2024-11-29 13:06:15.612461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.612480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:102344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.204 [2024-11-29 13:06:15.612493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.612516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:102352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.204 [2024-11-29 13:06:15.612531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.612549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:20 nsid:1 lba:102360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.204 [2024-11-29 13:06:15.612562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.612581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.204 [2024-11-29 13:06:15.612593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.612611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:102376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.204 [2024-11-29 13:06:15.612624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.612642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:102384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.204 [2024-11-29 13:06:15.612654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.612672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:102392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.204 [2024-11-29 13:06:15.612685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.612712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:102400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.204 [2024-11-29 13:06:15.612733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.612752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:102408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.204 [2024-11-29 13:06:15.612764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.612783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:101776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.204 [2024-11-29 13:06:15.612796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.612817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:101784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.204 [2024-11-29 13:06:15.612831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.612850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:101792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.204 [2024-11-29 13:06:15.612863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.612881] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:101800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.204 [2024-11-29 13:06:15.612939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.612961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:101808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.204 [2024-11-29 13:06:15.612975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.612994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:101816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.204 [2024-11-29 13:06:15.613008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.613026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.204 [2024-11-29 13:06:15.613040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.613059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:101832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.204 [2024-11-29 13:06:15.613073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.613248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.204 [2024-11-29 13:06:15.613272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.613311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:102424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.204 [2024-11-29 13:06:15.613326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.613357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:102432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.204 [2024-11-29 13:06:15.613371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.613391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:102440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.204 [2024-11-29 13:06:15.613404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.613423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:102448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.204 [2024-11-29 13:06:15.613436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 
sqhd:003f p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.613456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:102456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.204 [2024-11-29 13:06:15.613469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.613488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:102464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.204 [2024-11-29 13:06:15.613501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.613521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:102472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.204 [2024-11-29 13:06:15.613533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.613553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:101840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.204 [2024-11-29 13:06:15.613566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.613587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:101848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.204 [2024-11-29 13:06:15.613601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.613621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:101856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.204 [2024-11-29 13:06:15.613634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.613654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:101864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.204 [2024-11-29 13:06:15.613667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.613687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:101872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.204 [2024-11-29 13:06:15.613700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.613719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:101880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.204 [2024-11-29 13:06:15.613732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.613759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:101888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.204 [2024-11-29 13:06:15.613773] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.613792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:101896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.204 [2024-11-29 13:06:15.613805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.613825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:101904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.204 [2024-11-29 13:06:15.613838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.613857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:101912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.204 [2024-11-29 13:06:15.613870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.613901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:101920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.204 [2024-11-29 13:06:15.613918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.613954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:101928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.204 [2024-11-29 13:06:15.613968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:18.204 [2024-11-29 13:06:15.613988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:101936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.204 [2024-11-29 13:06:15.614001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.614021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:101944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.205 [2024-11-29 13:06:15.614035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.614055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:101952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.205 [2024-11-29 13:06:15.614068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.614088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:101960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.205 [2024-11-29 13:06:15.614101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.614254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.205 
[2024-11-29 13:06:15.614279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.614319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.205 [2024-11-29 13:06:15.614334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.614356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:102496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.205 [2024-11-29 13:06:15.614382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.614405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:102504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.205 [2024-11-29 13:06:15.614419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.614440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:102512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.205 [2024-11-29 13:06:15.614454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.614474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:102520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.205 [2024-11-29 13:06:15.614487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.614508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.205 [2024-11-29 13:06:15.614539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.614560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:102536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.205 [2024-11-29 13:06:15.614573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.614594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:102544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.205 [2024-11-29 13:06:15.614608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.614630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:102552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.205 [2024-11-29 13:06:15.614645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.614667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 
lba:102560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.205 [2024-11-29 13:06:15.614680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.614702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:102568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.205 [2024-11-29 13:06:15.614715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.614737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:102576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.205 [2024-11-29 13:06:15.614750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.614772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:102584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.205 [2024-11-29 13:06:15.614785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.614807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:102592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.205 [2024-11-29 13:06:15.614826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.614848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:102600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.205 [2024-11-29 13:06:15.614862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.614884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:101968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.205 [2024-11-29 13:06:15.614897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.614919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:101976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.205 [2024-11-29 13:06:15.614960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.615003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:101984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.205 [2024-11-29 13:06:15.615018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.615042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:101992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.205 [2024-11-29 13:06:15.615057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.615080] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:102000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.205 [2024-11-29 13:06:15.615094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.615118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:102008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.205 [2024-11-29 13:06:15.615132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.615156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:102016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.205 [2024-11-29 13:06:15.615170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.615193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.205 [2024-11-29 13:06:15.615207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.615230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:102032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.205 [2024-11-29 13:06:15.615245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.615268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:102040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.205 [2024-11-29 13:06:15.615299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.615335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:102048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.205 [2024-11-29 13:06:15.615348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.615379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:102056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.205 [2024-11-29 13:06:15.615393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.615415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:102064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.205 [2024-11-29 13:06:15.615428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.615449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:102072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.205 [2024-11-29 13:06:15.615474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0070 
p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.615497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:102080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.205 [2024-11-29 13:06:15.615512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:18.205 [2024-11-29 13:06:15.615535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:102088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.206 [2024-11-29 13:06:15.615549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.615588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:102608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.206 [2024-11-29 13:06:15.615606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.615630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:102616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.206 [2024-11-29 13:06:15.615644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.615666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:102624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.206 [2024-11-29 13:06:15.615679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.615702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:102632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.206 [2024-11-29 13:06:15.615715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.615737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:102640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.206 [2024-11-29 13:06:15.615751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.615773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.206 [2024-11-29 13:06:15.615786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.615808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:102656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.206 [2024-11-29 13:06:15.615822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.615866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:102664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.206 [2024-11-29 13:06:15.615881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.615902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:102096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.206 [2024-11-29 13:06:15.615916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.615937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:102104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.206 [2024-11-29 13:06:15.615963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.615988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:102112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.206 [2024-11-29 13:06:15.616001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.616024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:102120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.206 [2024-11-29 13:06:15.616038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.616059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:102128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.206 [2024-11-29 13:06:15.616072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.616093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:102136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.206 [2024-11-29 13:06:15.616107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.616128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:102144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.206 [2024-11-29 13:06:15.616142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.616164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:102152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.206 [2024-11-29 13:06:15.616177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.616198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:102160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.206 [2024-11-29 13:06:15.616211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.616233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:102168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.206 [2024-11-29 
13:06:15.616246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.616268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:102176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.206 [2024-11-29 13:06:15.616281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.616309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:102184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.206 [2024-11-29 13:06:15.616324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.616345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:102192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.206 [2024-11-29 13:06:15.616359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.616381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:102200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.206 [2024-11-29 13:06:15.616394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.616415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:102208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.206 [2024-11-29 13:06:15.616428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.616450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:102216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.206 [2024-11-29 13:06:15.616464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.616501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:102672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.206 [2024-11-29 13:06:15.616518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.616540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:102680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.206 [2024-11-29 13:06:15.616554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.616576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:102688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.206 [2024-11-29 13:06:15.616589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.616611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:102696 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.206 [2024-11-29 13:06:15.616624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.616646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:102704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.206 [2024-11-29 13:06:15.616659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.616680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.206 [2024-11-29 13:06:15.616694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.616715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:102720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.206 [2024-11-29 13:06:15.616728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.616749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:102728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.206 [2024-11-29 13:06:15.616770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.616793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:102224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.206 [2024-11-29 13:06:15.616806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.616827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:102232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.206 [2024-11-29 13:06:15.616841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.616862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:102240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.206 [2024-11-29 13:06:15.616885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.616911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:102248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.206 [2024-11-29 13:06:15.616924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.616946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:102256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.206 [2024-11-29 13:06:15.616959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:18.206 [2024-11-29 13:06:15.616981] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:102264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.207 [2024-11-29 13:06:15.616995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:15.617016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:102272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.207 [2024-11-29 13:06:15.617030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:15.617051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:102280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.207 [2024-11-29 13:06:15.617064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:15.617086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:102736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.207 [2024-11-29 13:06:15.617099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:15.617120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:102744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.207 [2024-11-29 13:06:15.617133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:15.617155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:102752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.207 [2024-11-29 13:06:15.617168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:15.617190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:102760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.207 [2024-11-29 13:06:15.617210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:15.617233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:102768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.207 [2024-11-29 13:06:15.617246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:15.617268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:102776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.207 [2024-11-29 13:06:15.617281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:15.617303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:102784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.207 [2024-11-29 13:06:15.617316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:20:18.207 [2024-11-29 13:06:15.617338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:102792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.207 [2024-11-29 13:06:15.617351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:18.207 8025.55 IOPS, 31.35 MiB/s [2024-11-29T13:06:49.722Z] 7676.61 IOPS, 29.99 MiB/s [2024-11-29T13:06:49.722Z] 7356.75 IOPS, 28.74 MiB/s [2024-11-29T13:06:49.722Z] 7062.48 IOPS, 27.59 MiB/s [2024-11-29T13:06:49.722Z] 6790.85 IOPS, 26.53 MiB/s [2024-11-29T13:06:49.722Z] 6539.33 IOPS, 25.54 MiB/s [2024-11-29T13:06:49.722Z] 6305.79 IOPS, 24.63 MiB/s [2024-11-29T13:06:49.722Z] 6103.07 IOPS, 23.84 MiB/s [2024-11-29T13:06:49.722Z] 6150.70 IOPS, 24.03 MiB/s [2024-11-29T13:06:49.722Z] 6199.00 IOPS, 24.21 MiB/s [2024-11-29T13:06:49.722Z] 6236.53 IOPS, 24.36 MiB/s [2024-11-29T13:06:49.722Z] 6309.73 IOPS, 24.65 MiB/s [2024-11-29T13:06:49.722Z] 6408.74 IOPS, 25.03 MiB/s [2024-11-29T13:06:49.722Z] 6496.83 IOPS, 25.38 MiB/s [2024-11-29T13:06:49.722Z] [2024-11-29 13:06:29.035653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.207 [2024-11-29 13:06:29.035718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:29.035792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.207 [2024-11-29 13:06:29.035812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:29.035831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.207 [2024-11-29 13:06:29.035845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:29.035864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.207 [2024-11-29 13:06:29.035877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:29.035907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.207 [2024-11-29 13:06:29.035924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:29.035942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.207 [2024-11-29 13:06:29.035957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:29.035975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.207 [2024-11-29 13:06:29.036013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:29.036033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.207 [2024-11-29 13:06:29.036047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:29.036100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.207 [2024-11-29 13:06:29.036119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:29.036134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.207 [2024-11-29 13:06:29.036146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:29.036160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.207 [2024-11-29 13:06:29.036171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:29.036185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.207 [2024-11-29 13:06:29.036196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:29.036210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.207 [2024-11-29 13:06:29.036221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:29.036234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.207 [2024-11-29 13:06:29.036246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:29.036259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.207 [2024-11-29 13:06:29.036270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:29.036283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.207 [2024-11-29 13:06:29.036294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:29.036307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.207 [2024-11-29 13:06:29.036319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:29.036334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.207 [2024-11-29 13:06:29.036346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:29.036359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.207 [2024-11-29 13:06:29.036370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:29.036393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.207 [2024-11-29 13:06:29.036406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:29.036419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.207 [2024-11-29 13:06:29.036431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:29.036444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.207 [2024-11-29 13:06:29.036457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:29.036470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.207 [2024-11-29 13:06:29.036481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:29.036494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.207 [2024-11-29 13:06:29.036505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:29.036519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.207 [2024-11-29 13:06:29.036530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:29.036543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.207 [2024-11-29 13:06:29.036555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.207 [2024-11-29 13:06:29.036568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.208 [2024-11-29 13:06:29.036580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 
[2024-11-29 13:06:29.036593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.208 [2024-11-29 13:06:29.036605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.036618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.208 [2024-11-29 13:06:29.036629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.036642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.208 [2024-11-29 13:06:29.036654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.036667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.208 [2024-11-29 13:06:29.036678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.036691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.208 [2024-11-29 13:06:29.036709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.036723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.208 [2024-11-29 13:06:29.036735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.036749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.208 [2024-11-29 13:06:29.036762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.036775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.208 [2024-11-29 13:06:29.036787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.036801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.208 [2024-11-29 13:06:29.036812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.036826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.208 [2024-11-29 13:06:29.036837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.036850] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.208 [2024-11-29 13:06:29.036862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.036875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.208 [2024-11-29 13:06:29.036912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.036926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.208 [2024-11-29 13:06:29.036937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.036951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.208 [2024-11-29 13:06:29.036962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.036975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.208 [2024-11-29 13:06:29.036987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.037000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.208 [2024-11-29 13:06:29.037012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.037025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.208 [2024-11-29 13:06:29.037036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.037056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.208 [2024-11-29 13:06:29.037068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.037082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.208 [2024-11-29 13:06:29.037093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.037106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.208 [2024-11-29 13:06:29.037118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.037131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:55 nsid:1 lba:11584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.208 [2024-11-29 13:06:29.037143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.037156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.208 [2024-11-29 13:06:29.037168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.037183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.208 [2024-11-29 13:06:29.037195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.037208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.208 [2024-11-29 13:06:29.037220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.037234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.208 [2024-11-29 13:06:29.037246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.037258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.208 [2024-11-29 13:06:29.037270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.037283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.208 [2024-11-29 13:06:29.037295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.037308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.208 [2024-11-29 13:06:29.037319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.037332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.208 [2024-11-29 13:06:29.037344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.037357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.208 [2024-11-29 13:06:29.037368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.037387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11600 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.208 [2024-11-29 13:06:29.037399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.037412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.208 [2024-11-29 13:06:29.037424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.037437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.208 [2024-11-29 13:06:29.037449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.037461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.208 [2024-11-29 13:06:29.037473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.037486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.208 [2024-11-29 13:06:29.037498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.037511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.208 [2024-11-29 13:06:29.037523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.037536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.208 [2024-11-29 13:06:29.037548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.037561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.208 [2024-11-29 13:06:29.037572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.037587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.208 [2024-11-29 13:06:29.037599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.208 [2024-11-29 13:06:29.037612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.209 [2024-11-29 13:06:29.037624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.037637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:18.209 [2024-11-29 13:06:29.037649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.037662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.209 [2024-11-29 13:06:29.037674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.037687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.209 [2024-11-29 13:06:29.037703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.037717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.209 [2024-11-29 13:06:29.037729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.037742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.209 [2024-11-29 13:06:29.037754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.037767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.209 [2024-11-29 13:06:29.037778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.037791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.209 [2024-11-29 13:06:29.037803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.037816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.209 [2024-11-29 13:06:29.037828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.037841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.209 [2024-11-29 13:06:29.037852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.037865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.209 [2024-11-29 13:06:29.037887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.037904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.209 [2024-11-29 13:06:29.037916] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.037929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.209 [2024-11-29 13:06:29.037941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.037954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.209 [2024-11-29 13:06:29.037966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.037979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.209 [2024-11-29 13:06:29.037991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.038022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.209 [2024-11-29 13:06:29.038035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.038055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.209 [2024-11-29 13:06:29.038068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.038081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:12192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.209 [2024-11-29 13:06:29.038093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.038106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.209 [2024-11-29 13:06:29.038119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.038132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.209 [2024-11-29 13:06:29.038144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.038157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.209 [2024-11-29 13:06:29.038169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.038183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.209 [2024-11-29 13:06:29.038195] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.038209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.209 [2024-11-29 13:06:29.038221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.038234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.209 [2024-11-29 13:06:29.038246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.038260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.209 [2024-11-29 13:06:29.038272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.038285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.209 [2024-11-29 13:06:29.038297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.038310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.209 [2024-11-29 13:06:29.038322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.038336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.209 [2024-11-29 13:06:29.038348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.038361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.209 [2024-11-29 13:06:29.038378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.038392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.209 [2024-11-29 13:06:29.038405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.038418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.209 [2024-11-29 13:06:29.038430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.038445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.209 [2024-11-29 13:06:29.038457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.038471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.209 [2024-11-29 13:06:29.038483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.038497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.209 [2024-11-29 13:06:29.038509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.038522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.209 [2024-11-29 13:06:29.038534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.038548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.209 [2024-11-29 13:06:29.038560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.038573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.209 [2024-11-29 13:06:29.038585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.038598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.209 [2024-11-29 13:06:29.038610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.038624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.209 [2024-11-29 13:06:29.038635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.209 [2024-11-29 13:06:29.038648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da310 is same with the state(6) to be set 00:20:18.209 [2024-11-29 13:06:29.038680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.209 [2024-11-29 13:06:29.038691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.210 [2024-11-29 13:06:29.038701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11840 len:8 PRP1 0x0 PRP2 0x0 00:20:18.210 [2024-11-29 13:06:29.038713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.210 [2024-11-29 13:06:29.038735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.210 [2024-11-29 13:06:29.038745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.210 [2024-11-29 13:06:29.038755] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12312 len:8 PRP1 0x0 PRP2 0x0 00:20:18.210 [2024-11-29 13:06:29.038767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.210 [2024-11-29 13:06:29.038779] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.210 [2024-11-29 13:06:29.038788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.210 [2024-11-29 13:06:29.038798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:8 PRP1 0x0 PRP2 0x0 00:20:18.210 [2024-11-29 13:06:29.038810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.210 [2024-11-29 13:06:29.038822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.210 [2024-11-29 13:06:29.038831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.210 [2024-11-29 13:06:29.038841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12328 len:8 PRP1 0x0 PRP2 0x0 00:20:18.210 [2024-11-29 13:06:29.038853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.210 [2024-11-29 13:06:29.038865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.210 [2024-11-29 13:06:29.038875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.210 [2024-11-29 13:06:29.038884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12336 len:8 PRP1 0x0 PRP2 0x0 00:20:18.210 [2024-11-29 13:06:29.038896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.210 [2024-11-29 13:06:29.038917] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.210 [2024-11-29 13:06:29.038929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.210 [2024-11-29 13:06:29.038938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12344 len:8 PRP1 0x0 PRP2 0x0 00:20:18.210 [2024-11-29 13:06:29.038982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.210 [2024-11-29 13:06:29.038996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.210 [2024-11-29 13:06:29.039005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.210 [2024-11-29 13:06:29.039015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:8 PRP1 0x0 PRP2 0x0 00:20:18.210 [2024-11-29 13:06:29.039027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.210 [2024-11-29 13:06:29.039040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.210 [2024-11-29 13:06:29.039049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.210 [2024-11-29 13:06:29.039059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:12360 len:8 PRP1 0x0 PRP2 0x0 00:20:18.210 [2024-11-29 13:06:29.039071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.210 [2024-11-29 13:06:29.039084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.210 [2024-11-29 13:06:29.039094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.210 [2024-11-29 13:06:29.039103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12368 len:8 PRP1 0x0 PRP2 0x0 00:20:18.210 [2024-11-29 13:06:29.039133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.210 [2024-11-29 13:06:29.039147] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.210 [2024-11-29 13:06:29.039157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.210 [2024-11-29 13:06:29.039167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12376 len:8 PRP1 0x0 PRP2 0x0 00:20:18.210 [2024-11-29 13:06:29.039179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.210 [2024-11-29 13:06:29.039192] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.210 [2024-11-29 13:06:29.039201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.210 [2024-11-29 13:06:29.039211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:8 PRP1 0x0 PRP2 0x0 00:20:18.210 [2024-11-29 13:06:29.039224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.210 [2024-11-29 13:06:29.039236] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.210 [2024-11-29 13:06:29.039260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.210 [2024-11-29 13:06:29.039270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12392 len:8 PRP1 0x0 PRP2 0x0 00:20:18.210 [2024-11-29 13:06:29.039282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.210 [2024-11-29 13:06:29.039309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.210 [2024-11-29 13:06:29.039317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.210 [2024-11-29 13:06:29.039326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12400 len:8 PRP1 0x0 PRP2 0x0 00:20:18.210 [2024-11-29 13:06:29.039338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.210 [2024-11-29 13:06:29.039349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.210 [2024-11-29 13:06:29.039358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.210 [2024-11-29 13:06:29.039370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12408 len:8 PRP1 0x0 PRP2 0x0 
00:20:18.210 [2024-11-29 13:06:29.039382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.210 [2024-11-29 13:06:29.039393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.210 [2024-11-29 13:06:29.039402] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.210 [2024-11-29 13:06:29.039410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:8 PRP1 0x0 PRP2 0x0 00:20:18.210 [2024-11-29 13:06:29.039422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.210 [2024-11-29 13:06:29.039433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.210 [2024-11-29 13:06:29.039442] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.210 [2024-11-29 13:06:29.039450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11848 len:8 PRP1 0x0 PRP2 0x0 00:20:18.210 [2024-11-29 13:06:29.039462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.210 [2024-11-29 13:06:29.039473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.210 [2024-11-29 13:06:29.039481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.210 [2024-11-29 13:06:29.039495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11856 len:8 PRP1 0x0 PRP2 0x0 00:20:18.210 [2024-11-29 13:06:29.039509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.210 [2024-11-29 13:06:29.039521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.210 [2024-11-29 13:06:29.039530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.210 [2024-11-29 13:06:29.039538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11864 len:8 PRP1 0x0 PRP2 0x0 00:20:18.210 [2024-11-29 13:06:29.039549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.210 [2024-11-29 13:06:29.039560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.210 [2024-11-29 13:06:29.039569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.210 [2024-11-29 13:06:29.039578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11872 len:8 PRP1 0x0 PRP2 0x0 00:20:18.210 [2024-11-29 13:06:29.039589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.210 [2024-11-29 13:06:29.039600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.210 [2024-11-29 13:06:29.039608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.210 [2024-11-29 13:06:29.039617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11880 len:8 PRP1 0x0 PRP2 0x0 00:20:18.210 [2024-11-29 13:06:29.039628] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.210 [2024-11-29 13:06:29.039640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.210 [2024-11-29 13:06:29.039648] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.210 [2024-11-29 13:06:29.039673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11888 len:8 PRP1 0x0 PRP2 0x0 00:20:18.210 [2024-11-29 13:06:29.039684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.210 [2024-11-29 13:06:29.039696] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.210 [2024-11-29 13:06:29.039705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.210 [2024-11-29 13:06:29.039715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11896 len:8 PRP1 0x0 PRP2 0x0 00:20:18.211 [2024-11-29 13:06:29.039727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.211 [2024-11-29 13:06:29.039738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.211 [2024-11-29 13:06:29.039747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.211 [2024-11-29 13:06:29.039756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11904 len:8 PRP1 0x0 PRP2 0x0 00:20:18.211 [2024-11-29 13:06:29.039776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.211 [2024-11-29 13:06:29.040942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:18.211 [2024-11-29 13:06:29.041027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.211 [2024-11-29 13:06:29.041048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.211 [2024-11-29 13:06:29.041078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x184b1e0 (9): Bad file descriptor 00:20:18.211 [2024-11-29 13:06:29.041497] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:18.211 [2024-11-29 13:06:29.041530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x184b1e0 with addr=10.0.0.3, port=4421 00:20:18.211 [2024-11-29 13:06:29.041546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184b1e0 is same with the state(6) to be set 00:20:18.211 [2024-11-29 13:06:29.041581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x184b1e0 (9): Bad file descriptor 00:20:18.211 [2024-11-29 13:06:29.041611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:20:18.211 [2024-11-29 13:06:29.041625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:20:18.211 [2024-11-29 13:06:29.041638] 
nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:20:18.211 [2024-11-29 13:06:29.041651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:20:18.211 [2024-11-29 13:06:29.041663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:20:18.211 6574.72 IOPS, 25.68 MiB/s [2024-11-29T13:06:49.726Z] 6647.30 IOPS, 25.97 MiB/s [2024-11-29T13:06:49.726Z] 6719.42 IOPS, 26.25 MiB/s [2024-11-29T13:06:49.726Z] 6791.03 IOPS, 26.53 MiB/s [2024-11-29T13:06:49.726Z] 6857.05 IOPS, 26.79 MiB/s [2024-11-29T13:06:49.726Z] 6920.83 IOPS, 27.03 MiB/s [2024-11-29T13:06:49.726Z] 6981.57 IOPS, 27.27 MiB/s [2024-11-29T13:06:49.726Z] 7038.47 IOPS, 27.49 MiB/s [2024-11-29T13:06:49.726Z] 7095.68 IOPS, 27.72 MiB/s [2024-11-29T13:06:49.726Z] 7148.18 IOPS, 27.92 MiB/s [2024-11-29T13:06:49.726Z] [2024-11-29 13:06:39.112915] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:20:18.211 7192.87 IOPS, 28.10 MiB/s [2024-11-29T13:06:49.726Z] 7246.49 IOPS, 28.31 MiB/s [2024-11-29T13:06:49.726Z] 7298.69 IOPS, 28.51 MiB/s [2024-11-29T13:06:49.726Z] 7344.43 IOPS, 28.69 MiB/s [2024-11-29T13:06:49.726Z] 7386.26 IOPS, 28.85 MiB/s [2024-11-29T13:06:49.726Z] 7426.06 IOPS, 29.01 MiB/s [2024-11-29T13:06:49.726Z] 7460.77 IOPS, 29.14 MiB/s [2024-11-29T13:06:49.726Z] 7493.89 IOPS, 29.27 MiB/s [2024-11-29T13:06:49.726Z] 7531.57 IOPS, 29.42 MiB/s [2024-11-29T13:06:49.726Z] 7563.65 IOPS, 29.55 MiB/s [2024-11-29T13:06:49.726Z] Received shutdown signal, test time was about 55.535202 seconds
00:20:18.211
00:20:18.211 Latency(us)
00:20:18.211 [2024-11-29T13:06:49.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:18.211 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:18.211 Verification LBA range: start 0x0 length 0x4000
00:20:18.211 Nvme0n1 : 55.53 7576.42 29.60 0.00 0.00 16864.13 1295.83 7015926.69
00:20:18.211 [2024-11-29T13:06:49.726Z] ===================================================================================================================
00:20:18.211 [2024-11-29T13:06:49.726Z] Total : 7576.42 29.60 0.00 0.00 16864.13 1295.83 7015926.69
00:20:18.211 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:18.470 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:20:18.470 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:20:18.470 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:20:18.470 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:20:18.470 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync
00:20:18.470 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:20:18.470 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e
00:20:18.470 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:18.470 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:20:18.470 rmmod nvme_tcp
00:20:18.470 rmmod
nvme_fabrics 00:20:18.470 rmmod nvme_keyring 00:20:18.470 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:18.470 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:20:18.470 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:20:18.470 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 80939 ']' 00:20:18.470 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 80939 00:20:18.470 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80939 ']' 00:20:18.470 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80939 00:20:18.470 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:20:18.470 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:18.470 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80939 00:20:18.470 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:18.470 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:18.470 killing process with pid 80939 00:20:18.470 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80939' 00:20:18.470 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80939 00:20:18.470 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80939 00:20:18.729 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:18.729 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:18.729 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:18.729 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:20:18.729 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:20:18.729 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:18.729 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:20:18.729 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:18.729 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:18.729 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:18.729 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:18.729 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:18.729 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:18.729 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:18.729 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:18.729 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath 
-- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:18.729 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:18.729 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:18.987 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:18.988 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:18.988 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:18.988 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:18.988 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:18.988 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.988 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:18.988 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.988 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:20:18.988 00:20:18.988 real 1m1.517s 00:20:18.988 user 2m49.670s 00:20:18.988 sys 0m19.113s 00:20:18.988 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:18.988 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:18.988 ************************************ 00:20:18.988 END TEST nvmf_host_multipath 00:20:18.988 ************************************ 00:20:18.988 13:06:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:20:18.988 13:06:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:18.988 13:06:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:18.988 13:06:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.988 ************************************ 00:20:18.988 START TEST nvmf_timeout 00:20:18.988 ************************************ 00:20:18.988 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:20:18.988 * Looking for test storage... 
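The teardown traced above (host/multipath.sh@120-125 handing off to nvmftestfini in nvmf/common.sh) reduces to: delete the subsystem over RPC, unload the kernel NVMe modules, kill the nvmf_tgt process, strip the SPDK-tagged iptables rules, and dismantle the veth/bridge/namespace test network. A condensed sketch of that sequence, pieced together from the commands shown in this trace (an approximation of what nvmftestfini did in this run, not the verbatim nvmf/common.sh source; the final netns removal is an assumption, since the trace only shows _remove_spdk_ns being invoked):

    # teardown sketch reconstructed from the trace above
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp            # rmmod nvme_tcp / nvme_fabrics / nvme_keyring as logged
    modprobe -v -r nvme-fabrics
    kill 80939                         # killprocess: stop the nvmf_tgt reactor started for this test
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK_NVMF-tagged rules
    ip link set nvmf_init_br nomaster && ip link set nvmf_init_br down   # likewise for *_br2 and nvmf_tgt_br*
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if && ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumption: _remove_spdk_ns drops the namespace itself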
00:20:18.988 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:18.988 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:18.988 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:20:18.988 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:19.247 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:19.247 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:19.247 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:19.247 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:19.247 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:20:19.247 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:20:19.247 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:20:19.247 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:20:19.247 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:20:19.247 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:20:19.247 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:20:19.247 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:19.247 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:20:19.247 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:20:19.247 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:19.247 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:19.247 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:20:19.247 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:20:19.247 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:19.247 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:20:19.247 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:20:19.247 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:20:19.247 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:20:19.247 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:19.247 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:20:19.247 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:20:19.247 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:19.247 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:19.247 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:20:19.247 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:19.247 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:19.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.247 --rc genhtml_branch_coverage=1 00:20:19.247 --rc genhtml_function_coverage=1 00:20:19.247 --rc genhtml_legend=1 00:20:19.247 --rc geninfo_all_blocks=1 00:20:19.247 --rc geninfo_unexecuted_blocks=1 00:20:19.247 00:20:19.247 ' 00:20:19.247 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:19.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.247 --rc genhtml_branch_coverage=1 00:20:19.247 --rc genhtml_function_coverage=1 00:20:19.247 --rc genhtml_legend=1 00:20:19.247 --rc geninfo_all_blocks=1 00:20:19.247 --rc geninfo_unexecuted_blocks=1 00:20:19.247 00:20:19.247 ' 00:20:19.247 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:19.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.247 --rc genhtml_branch_coverage=1 00:20:19.247 --rc genhtml_function_coverage=1 00:20:19.247 --rc genhtml_legend=1 00:20:19.247 --rc geninfo_all_blocks=1 00:20:19.247 --rc geninfo_unexecuted_blocks=1 00:20:19.247 00:20:19.247 ' 00:20:19.247 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:19.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.248 --rc genhtml_branch_coverage=1 00:20:19.248 --rc genhtml_function_coverage=1 00:20:19.248 --rc genhtml_legend=1 00:20:19.248 --rc geninfo_all_blocks=1 00:20:19.248 --rc geninfo_unexecuted_blocks=1 00:20:19.248 00:20:19.248 ' 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:19.248 
13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:19.248 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:19.248 13:06:50 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:19.248 Cannot find device "nvmf_init_br" 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:19.248 Cannot find device "nvmf_init_br2" 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:20:19.248 Cannot find device "nvmf_tgt_br" 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:19.248 Cannot find device "nvmf_tgt_br2" 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:19.248 Cannot find device "nvmf_init_br" 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:19.248 Cannot find device "nvmf_init_br2" 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:19.248 Cannot find device "nvmf_tgt_br" 00:20:19.248 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:20:19.249 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:19.249 Cannot find device "nvmf_tgt_br2" 00:20:19.249 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:20:19.249 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:19.249 Cannot find device "nvmf_br" 00:20:19.249 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:20:19.249 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:19.249 Cannot find device "nvmf_init_if" 00:20:19.249 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:20:19.249 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:19.249 Cannot find device "nvmf_init_if2" 00:20:19.507 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:20:19.507 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:19.507 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:19.507 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:20:19.507 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:19.507 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:19.507 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:20:19.507 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:19.507 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:19.507 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:19.507 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:19.507 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:19.507 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:20:19.507 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:19.507 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:19.507 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:19.507 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:19.507 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:19.507 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:19.507 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:19.507 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:19.507 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:19.507 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:19.507 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:19.507 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:19.507 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:19.507 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:19.507 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:19.507 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:19.507 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:19.507 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:19.507 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:19.507 13:06:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:19.507 13:06:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:19.507 13:06:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:19.507 13:06:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:19.507 13:06:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:19.507 13:06:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:19.507 13:06:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
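For readability, the nvmf_veth_init sequence traced above boils down to the commands below. This is a condensed sketch of what the xtrace output shows: interface names and addresses are exactly those printed above, the repeated `up`/`master` invocations are summarised in comments, and the iptables rules are shown without the SPDK_NVMF comment string that the ipts wrapper appends.

  # Target-side veth ends live in their own namespace; initiator-side ends stay in the root namespace.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # All interfaces (plus lo inside the namespace) are then brought up, a bridge is created,
  # and every *_br peer end is enslaved to it so initiator and target namespaces can reach each other.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br   # likewise nvmf_init_br2, nvmf_tgt_br, nvmf_tgt_br2
  # NVMe/TCP traffic to port 4420 and bridge-internal forwarding are explicitly allowed.
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT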
00:20:19.766 13:06:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:19.766 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:19.766 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:20:19.766 00:20:19.766 --- 10.0.0.3 ping statistics --- 00:20:19.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.766 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:20:19.766 13:06:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:19.766 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:19.766 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.092 ms 00:20:19.766 00:20:19.766 --- 10.0.0.4 ping statistics --- 00:20:19.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.766 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:20:19.766 13:06:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:19.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:19.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:20:19.766 00:20:19.766 --- 10.0.0.1 ping statistics --- 00:20:19.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.766 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:20:19.766 13:06:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:19.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:19.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:20:19.766 00:20:19.766 --- 10.0.0.2 ping statistics --- 00:20:19.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.766 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:20:19.766 13:06:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:19.766 13:06:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:20:19.766 13:06:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:19.766 13:06:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:19.766 13:06:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:19.766 13:06:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:19.766 13:06:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:19.766 13:06:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:19.766 13:06:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:19.766 13:06:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:20:19.766 13:06:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:19.766 13:06:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:19.766 13:06:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:19.766 13:06:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=82150 00:20:19.766 13:06:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:19.766 13:06:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 82150 00:20:19.766 13:06:51 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82150 ']' 00:20:19.766 13:06:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.766 13:06:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:19.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.766 13:06:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.766 13:06:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:19.766 13:06:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:19.766 [2024-11-29 13:06:51.122420] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:20:19.766 [2024-11-29 13:06:51.122492] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.766 [2024-11-29 13:06:51.267570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:20.025 [2024-11-29 13:06:51.322601] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.025 [2024-11-29 13:06:51.322870] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:20.025 [2024-11-29 13:06:51.323042] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:20.025 [2024-11-29 13:06:51.323127] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:20.025 [2024-11-29 13:06:51.323212] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
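The DPDK/EAL notices above come from nvmf_tgt starting inside the target namespace. The launch plus the wait for its RPC socket amounts to roughly the following; the binary path and arguments are taken from the trace, while the polling loop is only an approximation of the waitforlisten helper in autotest_common.sh, whose body is not shown in this log.

  # Start nvmf_tgt on cores 0-1 inside the target namespace, exactly as traced above.
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # Approximate stand-in for 'waitforlisten 82150': poll the default RPC socket until the app answers.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      sleep 0.5
  done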
00:20:20.025 [2024-11-29 13:06:51.324499] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:20.025 [2024-11-29 13:06:51.324506] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.025 [2024-11-29 13:06:51.378584] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:20.961 13:06:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:20.961 13:06:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:20:20.961 13:06:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:20.961 13:06:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:20.961 13:06:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:20.961 13:06:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.961 13:06:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:20.961 13:06:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:20.961 [2024-11-29 13:06:52.441384] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:20.961 13:06:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:21.528 Malloc0 00:20:21.528 13:06:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:21.787 13:06:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:21.787 13:06:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:22.045 [2024-11-29 13:06:53.509506] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:22.045 13:06:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:20:22.045 13:06:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82205 00:20:22.045 13:06:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82205 /var/tmp/bdevperf.sock 00:20:22.045 13:06:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82205 ']' 00:20:22.045 13:06:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:22.045 13:06:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:22.045 13:06:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:22.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:22.045 13:06:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:22.045 13:06:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:22.315 [2024-11-29 13:06:53.570762] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:20:22.315 [2024-11-29 13:06:53.570843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82205 ] 00:20:22.315 [2024-11-29 13:06:53.719149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.315 [2024-11-29 13:06:53.790405] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.588 [2024-11-29 13:06:53.854362] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:22.588 13:06:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:22.588 13:06:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:20:22.588 13:06:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:22.846 13:06:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:20:23.103 NVMe0n1 00:20:23.103 13:06:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82221 00:20:23.103 13:06:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:23.103 13:06:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:20:23.361 Running I/O for 10 seconds... 
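With the target and the bdevperf application both up, the configuration performed by the rpc.py calls above reduces to the sequence below. All paths, NQNs, addresses and flags are copied from the trace (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py); only the comments are added here to describe the intent of the timeout-related options.

  # Target side (default RPC socket /var/tmp/spdk.sock):
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

  # Host side: bdevperf runs with -z (wait for RPC) and is configured over its own socket.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  # Retry the connection every 2 s and give up on the controller after 5 s without it;
  # this reconnect behaviour is what the timeout test exercises below.
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # perform_tests starts the 10-second verify workload whose I/O is aborted further down
  # when host/timeout.sh removes the listener mid-run.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &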
00:20:24.298 13:06:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:24.561 7594.00 IOPS, 29.66 MiB/s [2024-11-29T13:06:56.076Z] [2024-11-29 13:06:55.825128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.561 [2024-11-29 13:06:55.825179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.561 [2024-11-29 13:06:55.825215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.561 [2024-11-29 13:06:55.825226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.561 [2024-11-29 13:06:55.825237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.561 [2024-11-29 13:06:55.825245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.561 [2024-11-29 13:06:55.825256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.561 [2024-11-29 13:06:55.825264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.561 [2024-11-29 13:06:55.825274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.561 [2024-11-29 13:06:55.825282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.561 [2024-11-29 13:06:55.825292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.561 [2024-11-29 13:06:55.825300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.561 [2024-11-29 13:06:55.825321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.561 [2024-11-29 13:06:55.825329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.561 [2024-11-29 13:06:55.825339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.561 [2024-11-29 13:06:55.825347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.561 [2024-11-29 13:06:55.825357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:72336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.561 [2024-11-29 13:06:55.825365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.561 [2024-11-29 13:06:55.825375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:72344 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.561 [2024-11-29 13:06:55.825384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.561 [2024-11-29 13:06:55.825394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:72352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.561 [2024-11-29 13:06:55.825403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.561 [2024-11-29 13:06:55.825413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:72360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.561 [2024-11-29 13:06:55.825438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.561 [2024-11-29 13:06:55.825471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:72368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.561 [2024-11-29 13:06:55.825480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.561 [2024-11-29 13:06:55.825490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:72376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.561 [2024-11-29 13:06:55.825500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.561 [2024-11-29 13:06:55.825512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:72384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.561 [2024-11-29 13:06:55.825521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.561 [2024-11-29 13:06:55.825532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:72392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.561 [2024-11-29 13:06:55.825540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.561 [2024-11-29 13:06:55.825551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:72400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.561 [2024-11-29 13:06:55.825571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.561 [2024-11-29 13:06:55.825584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.561 [2024-11-29 13:06:55.825606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.561 [2024-11-29 13:06:55.825617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:72416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.561 [2024-11-29 13:06:55.825625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.561 [2024-11-29 13:06:55.825636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:72424 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:24.561 [2024-11-29 13:06:55.825644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.561 [2024-11-29 13:06:55.825654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:72432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.561 [2024-11-29 13:06:55.825663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.561 [2024-11-29 13:06:55.825673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.561 [2024-11-29 13:06:55.825689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.561 [2024-11-29 13:06:55.825699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:72448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.561 [2024-11-29 13:06:55.825715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.561 [2024-11-29 13:06:55.825726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:72456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.561 [2024-11-29 13:06:55.825734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.561 [2024-11-29 13:06:55.825744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:72464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.561 [2024-11-29 13:06:55.825754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.561 [2024-11-29 13:06:55.825764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:72472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.561 [2024-11-29 13:06:55.825773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.561 [2024-11-29 13:06:55.825787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:72480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.562 [2024-11-29 13:06:55.825795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.562 [2024-11-29 13:06:55.825805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.562 [2024-11-29 13:06:55.825813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.562 [2024-11-29 13:06:55.825824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:72496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.562 [2024-11-29 13:06:55.825832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.562 [2024-11-29 13:06:55.825842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:72504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.562 [2024-11-29 
13:06:55.825851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.562 [2024-11-29 13:06:55.825862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:72512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.562 [2024-11-29 13:06:55.825870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.562 [2024-11-29 13:06:55.825881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.562 [2024-11-29 13:06:55.825889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.562 [2024-11-29 13:06:55.825900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:73040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.562 [2024-11-29 13:06:55.825920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.562 [2024-11-29 13:06:55.825932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.562 [2024-11-29 13:06:55.825941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.562 [2024-11-29 13:06:55.825952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.562 [2024-11-29 13:06:55.825960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.562 [2024-11-29 13:06:55.825971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.562 [2024-11-29 13:06:55.825979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.562 [2024-11-29 13:06:55.825989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:73072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.562 [2024-11-29 13:06:55.825998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.562 [2024-11-29 13:06:55.826017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.562 [2024-11-29 13:06:55.826027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.562 [2024-11-29 13:06:55.826046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:73088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.562 [2024-11-29 13:06:55.826055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.562 [2024-11-29 13:06:55.826066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.562 [2024-11-29 13:06:55.826074] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.562 [2024-11-29 13:06:55.826084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:72528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.562 [2024-11-29 13:06:55.826093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.562 [2024-11-29 13:06:55.826110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:72536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.562 [2024-11-29 13:06:55.826119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.562 [2024-11-29 13:06:55.826130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.562 [2024-11-29 13:06:55.826143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.562 [2024-11-29 13:06:55.826153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:72552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.562 [2024-11-29 13:06:55.826161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.562 [2024-11-29 13:06:55.826172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:72560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.562 [2024-11-29 13:06:55.826180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.562 [2024-11-29 13:06:55.826199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.562 [2024-11-29 13:06:55.826222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.562 [2024-11-29 13:06:55.826232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.562 [2024-11-29 13:06:55.826241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.562 [2024-11-29 13:06:55.826251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:72584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.562 [2024-11-29 13:06:55.826260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.562 [2024-11-29 13:06:55.826289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:72592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.562 [2024-11-29 13:06:55.826297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.562 [2024-11-29 13:06:55.826308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:72600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.562 [2024-11-29 13:06:55.826317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.562 [2024-11-29 13:06:55.826328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:72608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.562 [2024-11-29 13:06:55.826336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.562 [2024-11-29 13:06:55.826346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.562 [2024-11-29 13:06:55.826355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.562 [2024-11-29 13:06:55.826365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:72624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.562 [2024-11-29 13:06:55.826374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.562 [2024-11-29 13:06:55.826384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:72632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.562 [2024-11-29 13:06:55.826392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.562 [2024-11-29 13:06:55.826407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:72640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.562 [2024-11-29 13:06:55.826418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.562 [2024-11-29 13:06:55.826428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:72648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.562 [2024-11-29 13:06:55.826437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.562 [2024-11-29 13:06:55.826447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.562 [2024-11-29 13:06:55.826456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.562 [2024-11-29 13:06:55.826466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:73112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.562 [2024-11-29 13:06:55.826475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.563 [2024-11-29 13:06:55.826509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.563 [2024-11-29 13:06:55.826518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.563 [2024-11-29 13:06:55.826529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:73128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.563 [2024-11-29 13:06:55.826537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:24.563 [2024-11-29 13:06:55.826548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:73136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.563 [2024-11-29 13:06:55.826557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.563 [2024-11-29 13:06:55.826568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.563 [2024-11-29 13:06:55.826577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.563 [2024-11-29 13:06:55.826587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:73152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.563 [2024-11-29 13:06:55.826597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.563 [2024-11-29 13:06:55.826607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.563 [2024-11-29 13:06:55.826616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.563 [2024-11-29 13:06:55.826628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:72656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.563 [2024-11-29 13:06:55.826636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.563 [2024-11-29 13:06:55.826647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:72664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.563 [2024-11-29 13:06:55.826656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.563 [2024-11-29 13:06:55.826667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.563 [2024-11-29 13:06:55.826676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.563 [2024-11-29 13:06:55.826686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:72680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.563 [2024-11-29 13:06:55.826695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.563 [2024-11-29 13:06:55.826706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:72688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.563 [2024-11-29 13:06:55.826714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.563 [2024-11-29 13:06:55.826725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:72696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.563 [2024-11-29 13:06:55.826734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.563 
[2024-11-29 13:06:55.826749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:72704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.563 [2024-11-29 13:06:55.826759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.563 [2024-11-29 13:06:55.826769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.563 [2024-11-29 13:06:55.826794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.563 [2024-11-29 13:06:55.826805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:72720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.563 [2024-11-29 13:06:55.826814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.563 [2024-11-29 13:06:55.826825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:72728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.563 [2024-11-29 13:06:55.826834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.563 [2024-11-29 13:06:55.826845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:72736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.563 [2024-11-29 13:06:55.826854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.563 [2024-11-29 13:06:55.826865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:72744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.563 [2024-11-29 13:06:55.826874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.563 [2024-11-29 13:06:55.826885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:72752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.563 [2024-11-29 13:06:55.826894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.563 [2024-11-29 13:06:55.826904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:72760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.563 [2024-11-29 13:06:55.826913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.563 [2024-11-29 13:06:55.826924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:72768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.563 [2024-11-29 13:06:55.826972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.563 [2024-11-29 13:06:55.826986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:72776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.563 [2024-11-29 13:06:55.826995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.563 [2024-11-29 13:06:55.827007] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:72784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.563 [2024-11-29 13:06:55.827016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.563 [2024-11-29 13:06:55.827029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.563 [2024-11-29 13:06:55.827038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.563 [2024-11-29 13:06:55.827050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:72800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.563 [2024-11-29 13:06:55.827059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.563 [2024-11-29 13:06:55.827071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:72808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.563 [2024-11-29 13:06:55.827081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.563 [2024-11-29 13:06:55.827092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:72816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.563 [2024-11-29 13:06:55.827102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.563 [2024-11-29 13:06:55.827113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:72824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.563 [2024-11-29 13:06:55.827123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.563 [2024-11-29 13:06:55.827139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:72832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.563 [2024-11-29 13:06:55.827149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.563 [2024-11-29 13:06:55.827175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:72840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.563 [2024-11-29 13:06:55.827184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.564 [2024-11-29 13:06:55.827195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:73168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.564 [2024-11-29 13:06:55.827204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.564 [2024-11-29 13:06:55.827231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.564 [2024-11-29 13:06:55.827249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.564 [2024-11-29 13:06:55.827261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:73184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.564 [2024-11-29 13:06:55.827292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.564 [2024-11-29 13:06:55.827303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:73192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.564 [2024-11-29 13:06:55.827316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.564 [2024-11-29 13:06:55.827327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:73200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.564 [2024-11-29 13:06:55.827339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.564 [2024-11-29 13:06:55.827350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:73208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.564 [2024-11-29 13:06:55.827382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.564 [2024-11-29 13:06:55.827404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:73216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.564 [2024-11-29 13:06:55.827413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.564 [2024-11-29 13:06:55.827423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:73224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.564 [2024-11-29 13:06:55.827432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.564 [2024-11-29 13:06:55.827442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.564 [2024-11-29 13:06:55.827452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.564 [2024-11-29 13:06:55.827463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.564 [2024-11-29 13:06:55.827472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.564 [2024-11-29 13:06:55.827482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:73248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.564 [2024-11-29 13:06:55.827491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.564 [2024-11-29 13:06:55.827502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:73256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.564 [2024-11-29 13:06:55.827511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.564 [2024-11-29 13:06:55.827521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:73264 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:20:24.564 [2024-11-29 13:06:55.827530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.564 [2024-11-29 13:06:55.827541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:73272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.564 [2024-11-29 13:06:55.827549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.564 [2024-11-29 13:06:55.827565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:73280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.564 [2024-11-29 13:06:55.827585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.564 [2024-11-29 13:06:55.827596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:73288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.564 [2024-11-29 13:06:55.827605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.564 [2024-11-29 13:06:55.827615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:72848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.564 [2024-11-29 13:06:55.827624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.564 [2024-11-29 13:06:55.827635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:72856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.564 [2024-11-29 13:06:55.827644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.564 [2024-11-29 13:06:55.827655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:72864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.564 [2024-11-29 13:06:55.827663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.564 [2024-11-29 13:06:55.827674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:72872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.564 [2024-11-29 13:06:55.827683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.564 [2024-11-29 13:06:55.827693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:72880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.564 [2024-11-29 13:06:55.827702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.564 [2024-11-29 13:06:55.827713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:72888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.564 [2024-11-29 13:06:55.827722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.564 [2024-11-29 13:06:55.827733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:72896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.564 
[2024-11-29 13:06:55.827748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.564 [2024-11-29 13:06:55.827759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:72904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.564 [2024-11-29 13:06:55.827768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.564 [2024-11-29 13:06:55.827778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.564 [2024-11-29 13:06:55.827787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.564 [2024-11-29 13:06:55.827798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:73304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.564 [2024-11-29 13:06:55.827807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.564 [2024-11-29 13:06:55.827819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.564 [2024-11-29 13:06:55.827828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.564 [2024-11-29 13:06:55.827843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.564 [2024-11-29 13:06:55.827851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.564 [2024-11-29 13:06:55.827862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.564 [2024-11-29 13:06:55.827871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.564 [2024-11-29 13:06:55.827881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.564 [2024-11-29 13:06:55.827890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.564 [2024-11-29 13:06:55.827916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:73344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.564 [2024-11-29 13:06:55.827925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.564 [2024-11-29 13:06:55.827944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.564 [2024-11-29 13:06:55.827954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.564 [2024-11-29 13:06:55.827965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:72912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.564 [2024-11-29 13:06:55.827974] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.564 [2024-11-29 13:06:55.827984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:72920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.565 [2024-11-29 13:06:55.828002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.565 [2024-11-29 13:06:55.828015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:72928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.565 [2024-11-29 13:06:55.828024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.565 [2024-11-29 13:06:55.828034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:72936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.565 [2024-11-29 13:06:55.828043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.565 [2024-11-29 13:06:55.828054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:72944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.565 [2024-11-29 13:06:55.828062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.565 [2024-11-29 13:06:55.828073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.565 [2024-11-29 13:06:55.828097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.565 [2024-11-29 13:06:55.828124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:72960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.565 [2024-11-29 13:06:55.828138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.565 [2024-11-29 13:06:55.828148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd19970 is same with the state(6) to be set 00:20:24.565 [2024-11-29 13:06:55.828160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:24.565 [2024-11-29 13:06:55.828168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:24.565 [2024-11-29 13:06:55.828176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72968 len:8 PRP1 0x0 PRP2 0x0 00:20:24.565 [2024-11-29 13:06:55.828184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.565 [2024-11-29 13:06:55.828507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:24.565 [2024-11-29 13:06:55.828624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb9e50 (9): Bad file descriptor 00:20:24.565 [2024-11-29 13:06:55.828757] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:24.565 [2024-11-29 13:06:55.828795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb9e50 with addr=10.0.0.3, 
port=4420 00:20:24.565 [2024-11-29 13:06:55.828812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb9e50 is same with the state(6) to be set 00:20:24.565 [2024-11-29 13:06:55.828836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb9e50 (9): Bad file descriptor 00:20:24.565 [2024-11-29 13:06:55.828857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:24.565 [2024-11-29 13:06:55.828871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:24.565 [2024-11-29 13:06:55.828920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:24.565 [2024-11-29 13:06:55.828939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:20:24.565 [2024-11-29 13:06:55.828955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:24.565 13:06:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:20:26.437 4521.00 IOPS, 17.66 MiB/s [2024-11-29T13:06:57.952Z] 3014.00 IOPS, 11.77 MiB/s [2024-11-29T13:06:57.952Z] [2024-11-29 13:06:57.829104] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:26.437 [2024-11-29 13:06:57.829176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb9e50 with addr=10.0.0.3, port=4420 00:20:26.437 [2024-11-29 13:06:57.829200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb9e50 is same with the state(6) to be set 00:20:26.437 [2024-11-29 13:06:57.829225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb9e50 (9): Bad file descriptor 00:20:26.437 [2024-11-29 13:06:57.829256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:26.437 [2024-11-29 13:06:57.829268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:26.437 [2024-11-29 13:06:57.829279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:26.437 [2024-11-29 13:06:57.829290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:20:26.437 [2024-11-29 13:06:57.829302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:26.437 13:06:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:20:26.437 13:06:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:26.437 13:06:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:20:26.696 13:06:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:20:26.696 13:06:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:20:26.696 13:06:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:20:26.696 13:06:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:20:26.955 13:06:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:20:26.955 13:06:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:20:28.593 2260.50 IOPS, 8.83 MiB/s [2024-11-29T13:07:00.108Z] 1808.40 IOPS, 7.06 MiB/s [2024-11-29T13:07:00.108Z] [2024-11-29 13:06:59.829519] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:28.593 [2024-11-29 13:06:59.829640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb9e50 with addr=10.0.0.3, port=4420 00:20:28.593 [2024-11-29 13:06:59.829657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb9e50 is same with the state(6) to be set 00:20:28.593 [2024-11-29 13:06:59.829700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb9e50 (9): Bad file descriptor 00:20:28.593 [2024-11-29 13:06:59.829721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:28.593 [2024-11-29 13:06:59.829732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:28.593 [2024-11-29 13:06:59.829744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:28.593 [2024-11-29 13:06:59.829757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:20:28.593 [2024-11-29 13:06:59.829770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:30.541 1507.00 IOPS, 5.89 MiB/s [2024-11-29T13:07:02.056Z] 1291.71 IOPS, 5.05 MiB/s [2024-11-29T13:07:02.056Z] [2024-11-29 13:07:01.829844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:30.541 [2024-11-29 13:07:01.829916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:30.541 [2024-11-29 13:07:01.829947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:30.541 [2024-11-29 13:07:01.829957] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:20:30.541 [2024-11-29 13:07:01.829970] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
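Note: the get_controller/get_bdev steps traced above reduce to querying the bdevperf RPC socket and comparing names; a minimal sketch of that pattern, assuming the same rpc.py path, socket, and NVMe0/NVMe0n1 names that appear in this run's trace:

    # Hedged sketch of the presence check traced above; paths and names are copied from this log
    # and may differ in other runs.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock
    ctrlr=$("$RPC" -s "$SOCK" bdev_nvme_get_controllers | jq -r '.[].name')
    bdev=$("$RPC" -s "$SOCK" bdev_get_bdevs | jq -r '.[].name')
    # The test expects the controller and bdev to still be registered after the induced reconnect failures.
    [[ "$ctrlr" == "NVMe0" && "$bdev" == "NVMe0n1" ]] || echo "controller/bdev missing"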
00:20:31.477 1130.25 IOPS, 4.42 MiB/s 00:20:31.477 Latency(us) 00:20:31.477 [2024-11-29T13:07:02.992Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:31.477 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:31.477 Verification LBA range: start 0x0 length 0x4000 00:20:31.477 NVMe0n1 : 8.16 1107.49 4.33 15.68 0.00 113792.64 3559.80 7015926.69 00:20:31.477 [2024-11-29T13:07:02.992Z] =================================================================================================================== 00:20:31.477 [2024-11-29T13:07:02.992Z] Total : 1107.49 4.33 15.68 0.00 113792.64 3559.80 7015926.69 00:20:31.477 { 00:20:31.477 "results": [ 00:20:31.477 { 00:20:31.477 "job": "NVMe0n1", 00:20:31.477 "core_mask": "0x4", 00:20:31.477 "workload": "verify", 00:20:31.477 "status": "finished", 00:20:31.477 "verify_range": { 00:20:31.477 "start": 0, 00:20:31.477 "length": 16384 00:20:31.477 }, 00:20:31.477 "queue_depth": 128, 00:20:31.477 "io_size": 4096, 00:20:31.477 "runtime": 8.164375, 00:20:31.477 "iops": 1107.4944499732067, 00:20:31.477 "mibps": 4.3261501952078385, 00:20:31.477 "io_failed": 128, 00:20:31.477 "io_timeout": 0, 00:20:31.477 "avg_latency_us": 113792.6429059185, 00:20:31.477 "min_latency_us": 3559.796363636364, 00:20:31.477 "max_latency_us": 7015926.69090909 00:20:31.477 } 00:20:31.477 ], 00:20:31.477 "core_count": 1 00:20:31.477 } 00:20:32.045 13:07:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:20:32.046 13:07:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:32.046 13:07:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:20:32.305 13:07:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:20:32.305 13:07:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:20:32.305 13:07:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:20:32.305 13:07:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:20:32.564 13:07:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:20:32.564 13:07:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 82221 00:20:32.564 13:07:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82205 00:20:32.564 13:07:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82205 ']' 00:20:32.564 13:07:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82205 00:20:32.564 13:07:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:20:32.564 13:07:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:32.564 13:07:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82205 00:20:32.564 13:07:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:32.564 13:07:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:32.564 killing process with pid 82205 00:20:32.564 13:07:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82205' 00:20:32.564 Received shutdown signal, test time was about 9.282982 seconds 
00:20:32.564 00:20:32.564 Latency(us) 00:20:32.564 [2024-11-29T13:07:04.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.564 [2024-11-29T13:07:04.079Z] =================================================================================================================== 00:20:32.564 [2024-11-29T13:07:04.079Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:32.564 13:07:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82205 00:20:32.564 13:07:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82205 00:20:32.824 13:07:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:33.083 [2024-11-29 13:07:04.492891] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:33.083 13:07:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82338 00:20:33.083 13:07:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:20:33.083 13:07:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82338 /var/tmp/bdevperf.sock 00:20:33.083 13:07:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82338 ']' 00:20:33.083 13:07:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:33.083 13:07:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:33.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:33.083 13:07:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:33.083 13:07:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:33.083 13:07:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:33.083 [2024-11-29 13:07:04.572018] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 
00:20:33.083 [2024-11-29 13:07:04.572110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82338 ] 00:20:33.343 [2024-11-29 13:07:04.717241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.343 [2024-11-29 13:07:04.767837] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:20:33.343 [2024-11-29 13:07:04.842739] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:34.281 13:07:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:34.281 13:07:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:20:34.281 13:07:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:34.281 13:07:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:20:34.849 NVMe0n1 00:20:34.849 13:07:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82362 00:20:34.849 13:07:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:34.849 13:07:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:20:34.849 Running I/O for 10 seconds... 
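Note: the second bdevperf instance above is wired up with explicit reconnect/timeout knobs before perform_tests starts; a minimal sketch of that sequence, using the RPCs and values exactly as they appear in this run's trace (socket path, address, and NQN are as logged):

    # Sketch of the attach + test-start sequence traced above; all values copied from this log.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock
    "$RPC" -s "$SOCK" bdev_nvme_set_options -r -1          # option passed verbatim by host/timeout.sh
    "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
    # Kick off the workload against the already-running bdevperf (-z) instance.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests &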
00:20:35.786 13:07:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:36.048 7334.00 IOPS, 28.65 MiB/s [2024-11-29T13:07:07.563Z] [2024-11-29 13:07:07.370618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa490a0 is same with the state(6) to be set 00:20:36.048 [2024-11-29 13:07:07.370920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:65120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.048 [2024-11-29 13:07:07.371019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.048 [2024-11-29 13:07:07.371039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:65128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.048 [2024-11-29 13:07:07.371050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.048 [2024-11-29 13:07:07.371061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.048 [2024-11-29 13:07:07.371070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.048 [2024-11-29 13:07:07.371080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.048 [2024-11-29 13:07:07.371089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.048 [2024-11-29 13:07:07.371102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.048 [2024-11-29 13:07:07.371110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.048 [2024-11-29 13:07:07.371121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.048 [2024-11-29 13:07:07.371129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.048 [2024-11-29 13:07:07.371140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.048 [2024-11-29 13:07:07.371148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.048 [2024-11-29 13:07:07.371158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.048 [2024-11-29 13:07:07.371167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.048 [2024-11-29 13:07:07.371177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.048 [2024-11-29 13:07:07.371185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.048 [2024-11-29 13:07:07.371195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.048 [2024-11-29 13:07:07.371204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.048 [2024-11-29 13:07:07.371214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:65136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.048 [2024-11-29 13:07:07.371222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.048 [2024-11-29 13:07:07.371232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.048 [2024-11-29 13:07:07.371241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.048 [2024-11-29 13:07:07.371251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:65152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.048 [2024-11-29 13:07:07.371259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.048 [2024-11-29 13:07:07.371284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:65160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.048 [2024-11-29 13:07:07.371307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.048 [2024-11-29 13:07:07.371317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.048 [2024-11-29 13:07:07.371325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.048 [2024-11-29 13:07:07.371334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:65176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.048 [2024-11-29 13:07:07.371346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.048 [2024-11-29 13:07:07.371356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:65184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.048 [2024-11-29 13:07:07.371376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.048 [2024-11-29 13:07:07.371386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:65192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.048 [2024-11-29 13:07:07.371394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.048 [2024-11-29 13:07:07.371405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:65200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.048 [2024-11-29 13:07:07.371413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.048 
[2024-11-29 13:07:07.371423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.049 [2024-11-29 13:07:07.371431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.371441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:65216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.049 [2024-11-29 13:07:07.371449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.371465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.049 [2024-11-29 13:07:07.371473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.371482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.049 [2024-11-29 13:07:07.371490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.371500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:65240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.049 [2024-11-29 13:07:07.371507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.371517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:65248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.049 [2024-11-29 13:07:07.371524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.371533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:65256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.049 [2024-11-29 13:07:07.371541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.371550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.049 [2024-11-29 13:07:07.371557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.371567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.049 [2024-11-29 13:07:07.371575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.371585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.049 [2024-11-29 13:07:07.371593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.371602] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.049 [2024-11-29 13:07:07.371610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.371620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.049 [2024-11-29 13:07:07.371629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.371638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.049 [2024-11-29 13:07:07.371646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.371656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.049 [2024-11-29 13:07:07.371664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.371674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.049 [2024-11-29 13:07:07.371681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.371692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.049 [2024-11-29 13:07:07.371700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.371710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:65272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.049 [2024-11-29 13:07:07.371718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.371728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:65280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.049 [2024-11-29 13:07:07.371736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.371746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.049 [2024-11-29 13:07:07.371753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.371763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.049 [2024-11-29 13:07:07.371772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.371783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:80 nsid:1 lba:65304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.049 [2024-11-29 13:07:07.371791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.371801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.049 [2024-11-29 13:07:07.371809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.371819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:65320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.049 [2024-11-29 13:07:07.371827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.371836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:65328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.049 [2024-11-29 13:07:07.371844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.371854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:65336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.049 [2024-11-29 13:07:07.371861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.371871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:65344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.049 [2024-11-29 13:07:07.371880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.371889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:65352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.049 [2024-11-29 13:07:07.371897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.371916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:65360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.049 [2024-11-29 13:07:07.371926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.371936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.049 [2024-11-29 13:07:07.371945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.371955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.049 [2024-11-29 13:07:07.371963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.371972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65384 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.049 [2024-11-29 13:07:07.371981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.371992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:65392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.049 [2024-11-29 13:07:07.372000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.372010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.049 [2024-11-29 13:07:07.372018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.372027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:65408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.049 [2024-11-29 13:07:07.372036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.372045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:65416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.049 [2024-11-29 13:07:07.372054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.372063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:65424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.049 [2024-11-29 13:07:07.372072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.372081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:65432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.049 [2024-11-29 13:07:07.372090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.372100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:65440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.049 [2024-11-29 13:07:07.372108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.372118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.049 [2024-11-29 13:07:07.372126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.372136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.049 [2024-11-29 13:07:07.372144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.049 [2024-11-29 13:07:07.372154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:36.049 [2024-11-29 13:07:07.372162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.050 [2024-11-29 13:07:07.372179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.050 [2024-11-29 13:07:07.372197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.050 [2024-11-29 13:07:07.372215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.050 [2024-11-29 13:07:07.372232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.050 [2024-11-29 13:07:07.372250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.050 [2024-11-29 13:07:07.372268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.050 [2024-11-29 13:07:07.372286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.050 [2024-11-29 13:07:07.372303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.050 [2024-11-29 13:07:07.372334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.050 [2024-11-29 13:07:07.372351] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.050 [2024-11-29 13:07:07.372370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.050 [2024-11-29 13:07:07.372388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.050 [2024-11-29 13:07:07.372406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.050 [2024-11-29 13:07:07.372424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.050 [2024-11-29 13:07:07.372441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:65464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.050 [2024-11-29 13:07:07.372459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.050 [2024-11-29 13:07:07.372476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:65480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.050 [2024-11-29 13:07:07.372493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:65488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.050 [2024-11-29 13:07:07.372510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:65496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.050 [2024-11-29 13:07:07.372528] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:65504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.050 [2024-11-29 13:07:07.372545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:65512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.050 [2024-11-29 13:07:07.372563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.050 [2024-11-29 13:07:07.372581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.050 [2024-11-29 13:07:07.372598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.050 [2024-11-29 13:07:07.372616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.050 [2024-11-29 13:07:07.372637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.050 [2024-11-29 13:07:07.372664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.050 [2024-11-29 13:07:07.372684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.050 [2024-11-29 13:07:07.372703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.050 [2024-11-29 13:07:07.372721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.050 [2024-11-29 13:07:07.372738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.050 [2024-11-29 13:07:07.372755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.050 [2024-11-29 13:07:07.372772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.050 [2024-11-29 13:07:07.372790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.050 [2024-11-29 13:07:07.372807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.050 [2024-11-29 13:07:07.372825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.050 [2024-11-29 13:07:07.372843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.050 [2024-11-29 13:07:07.372861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.050 [2024-11-29 13:07:07.372888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.050 [2024-11-29 13:07:07.372899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.051 [2024-11-29 13:07:07.372906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.051 
[2024-11-29 13:07:07.372916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.051 [2024-11-29 13:07:07.372925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.051 [2024-11-29 13:07:07.372935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.051 [2024-11-29 13:07:07.372942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.051 [2024-11-29 13:07:07.372952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:65520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.051 [2024-11-29 13:07:07.372966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.051 [2024-11-29 13:07:07.372984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:65528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.051 [2024-11-29 13:07:07.372992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.051 [2024-11-29 13:07:07.373002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:65536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.051 [2024-11-29 13:07:07.373010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.051 [2024-11-29 13:07:07.373020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.051 [2024-11-29 13:07:07.373028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.051 [2024-11-29 13:07:07.373037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:65552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.051 [2024-11-29 13:07:07.373045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.051 [2024-11-29 13:07:07.373054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:65560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.051 [2024-11-29 13:07:07.373062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.051 [2024-11-29 13:07:07.373072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:65568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.051 [2024-11-29 13:07:07.373080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.051 [2024-11-29 13:07:07.373089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1131970 is same with the state(6) to be set 00:20:36.051 [2024-11-29 13:07:07.373099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.051 [2024-11-29 13:07:07.373106] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:20:36.051 [2024-11-29 13:07:07.373113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65576 len:8 PRP1 0x0 PRP2 0x0 00:20:36.051 [2024-11-29 13:07:07.373121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.051 [2024-11-29 13:07:07.373131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.051 [2024-11-29 13:07:07.373137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.051 [2024-11-29 13:07:07.373144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66000 len:8 PRP1 0x0 PRP2 0x0 00:20:36.051 [2024-11-29 13:07:07.373152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.051 [2024-11-29 13:07:07.373161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.051 [2024-11-29 13:07:07.373167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.051 [2024-11-29 13:07:07.373174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66008 len:8 PRP1 0x0 PRP2 0x0 00:20:36.051 [2024-11-29 13:07:07.373182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.051 [2024-11-29 13:07:07.373189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.051 [2024-11-29 13:07:07.373195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.051 [2024-11-29 13:07:07.373202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66016 len:8 PRP1 0x0 PRP2 0x0 00:20:36.051 [2024-11-29 13:07:07.373209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.051 [2024-11-29 13:07:07.373217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.051 [2024-11-29 13:07:07.373223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.051 [2024-11-29 13:07:07.373236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66024 len:8 PRP1 0x0 PRP2 0x0 00:20:36.051 [2024-11-29 13:07:07.373251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.051 [2024-11-29 13:07:07.373259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.051 [2024-11-29 13:07:07.373265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.051 [2024-11-29 13:07:07.373272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66032 len:8 PRP1 0x0 PRP2 0x0 00:20:36.051 [2024-11-29 13:07:07.373279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.051 [2024-11-29 13:07:07.373287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.051 [2024-11-29 13:07:07.373293] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.051 [2024-11-29 13:07:07.373300] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66040 len:8 PRP1 0x0 PRP2 0x0 00:20:36.051 [2024-11-29 13:07:07.373308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.051 [2024-11-29 13:07:07.373316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.051 [2024-11-29 13:07:07.373323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.051 [2024-11-29 13:07:07.373330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66048 len:8 PRP1 0x0 PRP2 0x0 00:20:36.051 [2024-11-29 13:07:07.373338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.051 [2024-11-29 13:07:07.373346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.051 [2024-11-29 13:07:07.373352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.051 [2024-11-29 13:07:07.373359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66056 len:8 PRP1 0x0 PRP2 0x0 00:20:36.051 [2024-11-29 13:07:07.373366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.051 [2024-11-29 13:07:07.373374] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.051 [2024-11-29 13:07:07.373380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.051 [2024-11-29 13:07:07.373387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66064 len:8 PRP1 0x0 PRP2 0x0 00:20:36.051 [2024-11-29 13:07:07.373395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.051 [2024-11-29 13:07:07.373403] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.051 [2024-11-29 13:07:07.373409] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.051 [2024-11-29 13:07:07.373415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66072 len:8 PRP1 0x0 PRP2 0x0 00:20:36.051 [2024-11-29 13:07:07.373423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.051 [2024-11-29 13:07:07.373430] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.051 [2024-11-29 13:07:07.373437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.051 [2024-11-29 13:07:07.373443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66080 len:8 PRP1 0x0 PRP2 0x0 00:20:36.051 [2024-11-29 13:07:07.373450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.051 [2024-11-29 13:07:07.373458] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.051 [2024-11-29 13:07:07.373464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.051 [2024-11-29 13:07:07.373477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:66088 len:8 PRP1 0x0 PRP2 0x0 00:20:36.051 [2024-11-29 13:07:07.373490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.051 [2024-11-29 13:07:07.373498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.051 [2024-11-29 13:07:07.373504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.051 [2024-11-29 13:07:07.373511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66096 len:8 PRP1 0x0 PRP2 0x0 00:20:36.051 [2024-11-29 13:07:07.373519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.051 [2024-11-29 13:07:07.373528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.051 [2024-11-29 13:07:07.373534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.051 [2024-11-29 13:07:07.373540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66104 len:8 PRP1 0x0 PRP2 0x0 00:20:36.051 [2024-11-29 13:07:07.373548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.051 [2024-11-29 13:07:07.373556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.051 [2024-11-29 13:07:07.373562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.051 [2024-11-29 13:07:07.373569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66112 len:8 PRP1 0x0 PRP2 0x0 00:20:36.051 [2024-11-29 13:07:07.373577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.051 [2024-11-29 13:07:07.373585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.051 [2024-11-29 13:07:07.373592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.051 [2024-11-29 13:07:07.373599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66120 len:8 PRP1 0x0 PRP2 0x0 00:20:36.051 [2024-11-29 13:07:07.373606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.052 [2024-11-29 13:07:07.373614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.052 [2024-11-29 13:07:07.373620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.052 [2024-11-29 13:07:07.373627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66128 len:8 PRP1 0x0 PRP2 0x0 00:20:36.052 [2024-11-29 13:07:07.373634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.052 [2024-11-29 13:07:07.373642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.052 [2024-11-29 13:07:07.373648] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.052 [2024-11-29 13:07:07.373654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66136 len:8 PRP1 0x0 PRP2 
0x0 00:20:36.052 [2024-11-29 13:07:07.387450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.052 [2024-11-29 13:07:07.387631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:36.052 [2024-11-29 13:07:07.387650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.052 [2024-11-29 13:07:07.387660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:36.052 [2024-11-29 13:07:07.387668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.052 [2024-11-29 13:07:07.387677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:36.052 [2024-11-29 13:07:07.387685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.052 [2024-11-29 13:07:07.387694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:36.052 [2024-11-29 13:07:07.387702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.052 [2024-11-29 13:07:07.387710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d1e50 is same with the state(6) to be set 00:20:36.052 [2024-11-29 13:07:07.387938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:36.052 [2024-11-29 13:07:07.387963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d1e50 (9): Bad file descriptor 00:20:36.052 [2024-11-29 13:07:07.388046] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:36.052 [2024-11-29 13:07:07.388075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d1e50 with addr=10.0.0.3, port=4420 00:20:36.052 [2024-11-29 13:07:07.388085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d1e50 is same with the state(6) to be set 00:20:36.052 [2024-11-29 13:07:07.388103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d1e50 (9): Bad file descriptor 00:20:36.052 [2024-11-29 13:07:07.388117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:20:36.052 [2024-11-29 13:07:07.388125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:20:36.052 [2024-11-29 13:07:07.388135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:20:36.052 [2024-11-29 13:07:07.388144] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:20:36.052 [2024-11-29 13:07:07.388153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 
00:20:36.052 13:07:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 
00:20:36.991 4070.00 IOPS, 15.90 MiB/s 
[2024-11-29T13:07:08.506Z] [2024-11-29 13:07:08.388232] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 
00:20:36.991 [2024-11-29 13:07:08.388301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d1e50 with addr=10.0.0.3, port=4420 
00:20:36.991 [2024-11-29 13:07:08.388314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d1e50 is same with the state(6) to be set 
00:20:36.991 [2024-11-29 13:07:08.388332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d1e50 (9): Bad file descriptor 
00:20:36.991 [2024-11-29 13:07:08.388346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 
00:20:36.991 [2024-11-29 13:07:08.388355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 
00:20:36.991 [2024-11-29 13:07:08.388363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:20:36.991 [2024-11-29 13:07:08.388372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:20:36.991 [2024-11-29 13:07:08.388381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 
00:20:36.991 13:07:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 
00:20:37.270 [2024-11-29 13:07:08.641131] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 
00:20:37.270 13:07:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 82362 
00:20:38.101 2713.33 IOPS, 10.60 MiB/s 
[2024-11-29T13:07:09.616Z] [2024-11-29 13:07:09.401966] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:20:39.974 2035.00 IOPS, 7.95 MiB/s 
[2024-11-29T13:07:12.425Z] 3245.00 IOPS, 12.68 MiB/s 
[2024-11-29T13:07:13.362Z] 4290.83 IOPS, 16.76 MiB/s 
[2024-11-29T13:07:14.297Z] 4760.14 IOPS, 18.59 MiB/s 
[2024-11-29T13:07:15.673Z] 5332.00 IOPS, 20.83 MiB/s 
[2024-11-29T13:07:16.611Z] 5776.00 IOPS, 22.56 MiB/s 
[2024-11-29T13:07:16.611Z] 6120.00 IOPS, 23.91 MiB/s 
00:20:45.096                                                                                                  Latency(us) 
00:20:45.096 [2024-11-29T13:07:16.611Z] Device Information          : runtime(s)    IOPS      MiB/s    Fail/s    TO/s     Average        min        max 
00:20:45.096 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 
00:20:45.096 Verification LBA range: start 0x0 length 0x4000 
00:20:45.096 NVMe0n1                     :      10.01    6125.54     23.93      0.00      0.00    20863.42    1593.72  3035150.89 
00:20:45.096 [2024-11-29T13:07:16.611Z] =================================================================================================================== 
00:20:45.096 [2024-11-29T13:07:16.611Z] Total                       :              6125.54     23.93      0.00      0.00    20863.42    1593.72  3035150.89 
00:20:45.096 { 
00:20:45.096   "results": [ 
00:20:45.096     { 
00:20:45.096       "job": "NVMe0n1", 
00:20:45.096       "core_mask": "0x4", 
00:20:45.096       "workload": "verify", 
00:20:45.096       "status": "finished", 
00:20:45.096       "verify_range": { 
00:20:45.096         "start": 0, 
00:20:45.096         "length": 16384 
00:20:45.096       }, 
00:20:45.096       "queue_depth": 128, 
00:20:45.096       "io_size": 4096, 
00:20:45.096       "runtime": 10.011858, 
00:20:45.096       "iops": 6125.5363390092025, 
00:20:45.096       "mibps": 23.927876324254697, 
00:20:45.096       "io_failed": 0, 
00:20:45.096       "io_timeout": 0, 
00:20:45.096       "avg_latency_us": 20863.41614780732, 
00:20:45.096       "min_latency_us": 1593.7163636363637, 
00:20:45.096       "max_latency_us": 3035150.8945454545 
00:20:45.096     } 
00:20:45.096   ], 
00:20:45.096   "core_count": 1 
00:20:45.096 } 
00:20:45.096 13:07:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82472 
00:20:45.096 13:07:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 
00:20:45.096 13:07:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 
00:20:45.096 Running I/O for 10 seconds... 
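(Aside, not part of the captured console output.) The summary rows above are derived from the per-job JSON that bdevperf prints; a minimal sketch, using only the iops, io_size, and runtime fields shown in that JSON, of how the MiB/s and I/O-count figures can be re-derived:

    # Sanity-check the bdevperf summary using only fields from the JSON above.
    job = {
        "iops": 6125.5363390092025,
        "io_size": 4096,        # bytes per I/O
        "runtime": 10.011858,   # seconds
    }

    # Throughput in MiB/s: IOPS * bytes-per-I/O / 2^20.
    mibps = job["iops"] * job["io_size"] / (1024 * 1024)

    # Approximate number of I/Os completed during the run.
    total_ios = job["iops"] * job["runtime"]

    print(f"{mibps:.2f} MiB/s")     # ~23.93, matches the "Total" row
    print(f"{total_ios:.0f} I/Os")  # ~61328 over the ~10 s run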
00:20:46.033 13:07:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:46.295 8540.00 IOPS, 33.36 MiB/s [2024-11-29T13:07:17.810Z] [2024-11-29 13:07:17.608383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4a230 is same with the state(6) to be set 00:20:46.295 [2024-11-29 13:07:17.608504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4a230 is same with the state(6) to be set 00:20:46.295 [2024-11-29 13:07:17.608519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4a230 is same with the state(6) to be set 00:20:46.295 [2024-11-29 13:07:17.608778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.295 [2024-11-29 13:07:17.608816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.295 [2024-11-29 13:07:17.608837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:80504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.295 [2024-11-29 13:07:17.608848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.295 [2024-11-29 13:07:17.608860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.295 [2024-11-29 13:07:17.608871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.295 [2024-11-29 13:07:17.608882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:80520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.295 [2024-11-29 13:07:17.608892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.295 [2024-11-29 13:07:17.608904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.295 [2024-11-29 13:07:17.608914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.295 [2024-11-29 13:07:17.608926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.295 [2024-11-29 13:07:17.608936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.295 [2024-11-29 13:07:17.608947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.295 [2024-11-29 13:07:17.608957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.295 [2024-11-29 13:07:17.608968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.295 [2024-11-29 13:07:17.608977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:46.295 [2024-11-29 13:07:17.608988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.295 [2024-11-29 13:07:17.609010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.295 [2024-11-29 13:07:17.609033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.295 [2024-11-29 13:07:17.609042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.295 [2024-11-29 13:07:17.609053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.295 [2024-11-29 13:07:17.609078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.295 [2024-11-29 13:07:17.609088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.295 [2024-11-29 13:07:17.609096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.295 [2024-11-29 13:07:17.609105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.295 [2024-11-29 13:07:17.609114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.295 [2024-11-29 13:07:17.609124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.295 [2024-11-29 13:07:17.609132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.295 [2024-11-29 13:07:17.609141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.295 [2024-11-29 13:07:17.609149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.295 [2024-11-29 13:07:17.609159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.295 [2024-11-29 13:07:17.609167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.295 [2024-11-29 13:07:17.609192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.295 [2024-11-29 13:07:17.609230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.295 [2024-11-29 13:07:17.609241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.295 [2024-11-29 13:07:17.609250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.295 [2024-11-29 13:07:17.609260] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.295 [2024-11-29 13:07:17.609283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.295 [2024-11-29 13:07:17.609294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.295 [2024-11-29 13:07:17.609302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.295 [2024-11-29 13:07:17.609312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.295 [2024-11-29 13:07:17.609321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.295 [2024-11-29 13:07:17.609331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.296 [2024-11-29 13:07:17.609339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.609350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:80544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.296 [2024-11-29 13:07:17.609358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.609368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:80552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.296 [2024-11-29 13:07:17.609377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.609387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:80560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.296 [2024-11-29 13:07:17.609395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.609406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.296 [2024-11-29 13:07:17.609414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.609424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:80576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.296 [2024-11-29 13:07:17.609432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.609442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:80584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.296 [2024-11-29 13:07:17.609450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.609460] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:117 nsid:1 lba:80592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.296 [2024-11-29 13:07:17.609469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.609479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:80600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.296 [2024-11-29 13:07:17.609487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.609498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:80608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.296 [2024-11-29 13:07:17.609507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.609517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.296 [2024-11-29 13:07:17.609525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.609535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.296 [2024-11-29 13:07:17.609544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.609554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:80632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.296 [2024-11-29 13:07:17.609563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.609574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:80640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.296 [2024-11-29 13:07:17.609582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.609598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.296 [2024-11-29 13:07:17.609606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.609617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.296 [2024-11-29 13:07:17.609625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.609635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.296 [2024-11-29 13:07:17.609644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.609654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 
lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.296 [2024-11-29 13:07:17.609662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.609690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.296 [2024-11-29 13:07:17.609699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.609710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.296 [2024-11-29 13:07:17.609720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.609732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.296 [2024-11-29 13:07:17.609742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.609753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.296 [2024-11-29 13:07:17.609762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.609773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.296 [2024-11-29 13:07:17.609782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.609793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.296 [2024-11-29 13:07:17.609802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.609814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.296 [2024-11-29 13:07:17.609823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.609834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:80672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.296 [2024-11-29 13:07:17.609845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.609856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:80680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.296 [2024-11-29 13:07:17.609865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.609877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:80688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:46.296 [2024-11-29 13:07:17.609886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.609897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:80696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.296 [2024-11-29 13:07:17.609906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.609919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:80704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.296 [2024-11-29 13:07:17.609929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.609940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:80712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.296 [2024-11-29 13:07:17.609949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.609960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.296 [2024-11-29 13:07:17.609980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.610000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.296 [2024-11-29 13:07:17.610010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.610021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.296 [2024-11-29 13:07:17.610030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.610041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.296 [2024-11-29 13:07:17.610050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.610076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.296 [2024-11-29 13:07:17.610085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.610096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.296 [2024-11-29 13:07:17.610105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.610115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.296 [2024-11-29 13:07:17.610124] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.610136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.296 [2024-11-29 13:07:17.610144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.610155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.296 [2024-11-29 13:07:17.610163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.296 [2024-11-29 13:07:17.610174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.297 [2024-11-29 13:07:17.610184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.297 [2024-11-29 13:07:17.610205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.297 [2024-11-29 13:07:17.610224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:81136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.297 [2024-11-29 13:07:17.610243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.297 [2024-11-29 13:07:17.610274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.297 [2024-11-29 13:07:17.610293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.297 [2024-11-29 13:07:17.610312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:81168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.297 [2024-11-29 13:07:17.610331] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.297 [2024-11-29 13:07:17.610349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.297 [2024-11-29 13:07:17.610368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.297 [2024-11-29 13:07:17.610388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.297 [2024-11-29 13:07:17.610407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.297 [2024-11-29 13:07:17.610427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.297 [2024-11-29 13:07:17.610447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.297 [2024-11-29 13:07:17.610484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:80720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.297 [2024-11-29 13:07:17.610503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:80728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.297 [2024-11-29 13:07:17.610525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:80736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.297 [2024-11-29 13:07:17.610545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:80744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.297 [2024-11-29 13:07:17.610566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:80752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.297 [2024-11-29 13:07:17.610586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.297 [2024-11-29 13:07:17.610613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:80768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.297 [2024-11-29 13:07:17.610638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:80776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.297 [2024-11-29 13:07:17.610657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.297 [2024-11-29 13:07:17.610677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.297 [2024-11-29 13:07:17.610697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:81248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.297 [2024-11-29 13:07:17.610717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.297 [2024-11-29 13:07:17.610736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.297 [2024-11-29 13:07:17.610756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:46.297 [2024-11-29 13:07:17.610767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.297 [2024-11-29 13:07:17.610776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:81280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.297 [2024-11-29 13:07:17.610796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.297 [2024-11-29 13:07:17.610815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.297 [2024-11-29 13:07:17.610835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.297 [2024-11-29 13:07:17.610855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.297 [2024-11-29 13:07:17.610875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.297 [2024-11-29 13:07:17.610895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:81328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.297 [2024-11-29 13:07:17.610949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:81336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.297 [2024-11-29 13:07:17.610971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.610982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.297 [2024-11-29 13:07:17.610991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.611002] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.297 [2024-11-29 13:07:17.611010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.611021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:80784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.297 [2024-11-29 13:07:17.611030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.297 [2024-11-29 13:07:17.611041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:80792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.297 [2024-11-29 13:07:17.611050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.298 [2024-11-29 13:07:17.611063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:80800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.298 [2024-11-29 13:07:17.611072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.298 [2024-11-29 13:07:17.611082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:80808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.298 [2024-11-29 13:07:17.611091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.298 [2024-11-29 13:07:17.611102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:80816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.298 [2024-11-29 13:07:17.611111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.298 [2024-11-29 13:07:17.611123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:80824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.298 [2024-11-29 13:07:17.611132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.298 [2024-11-29 13:07:17.611143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:80832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.298 [2024-11-29 13:07:17.611152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.298 [2024-11-29 13:07:17.611163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112ffd0 is same with the state(6) to be set 00:20:46.298 [2024-11-29 13:07:17.611175] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:46.298 [2024-11-29 13:07:17.611183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:46.298 [2024-11-29 13:07:17.611191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80840 len:8 PRP1 0x0 PRP2 0x0 00:20:46.298 [2024-11-29 13:07:17.611200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.298 [2024-11-29 
13:07:17.611210] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:46.298 [2024-11-29 13:07:17.611218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:46.298 [2024-11-29 13:07:17.611225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81360 len:8 PRP1 0x0 PRP2 0x0 00:20:46.298 [2024-11-29 13:07:17.611234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.298 [2024-11-29 13:07:17.611263] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:46.298 [2024-11-29 13:07:17.611270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:46.298 [2024-11-29 13:07:17.611278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81368 len:8 PRP1 0x0 PRP2 0x0 00:20:46.298 [2024-11-29 13:07:17.611286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.298 [2024-11-29 13:07:17.611295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:46.298 [2024-11-29 13:07:17.611302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:46.298 [2024-11-29 13:07:17.611309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81376 len:8 PRP1 0x0 PRP2 0x0 00:20:46.298 [2024-11-29 13:07:17.611318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.298 [2024-11-29 13:07:17.611327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:46.298 [2024-11-29 13:07:17.611334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:46.298 [2024-11-29 13:07:17.611342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81384 len:8 PRP1 0x0 PRP2 0x0 00:20:46.298 [2024-11-29 13:07:17.611351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.298 [2024-11-29 13:07:17.611375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:46.298 [2024-11-29 13:07:17.611382] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:46.298 [2024-11-29 13:07:17.611390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81392 len:8 PRP1 0x0 PRP2 0x0 00:20:46.298 [2024-11-29 13:07:17.611407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.298 [2024-11-29 13:07:17.611416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:46.298 [2024-11-29 13:07:17.611423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:46.298 [2024-11-29 13:07:17.611431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81400 len:8 PRP1 0x0 PRP2 0x0 00:20:46.298 [2024-11-29 13:07:17.611439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.298 [2024-11-29 13:07:17.611448] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:46.298 [2024-11-29 13:07:17.611454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:46.298 [2024-11-29 13:07:17.611461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81408 len:8 PRP1 0x0 PRP2 0x0 00:20:46.298 [2024-11-29 13:07:17.611470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.298 [2024-11-29 13:07:17.611478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:46.298 [2024-11-29 13:07:17.611485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:46.298 [2024-11-29 13:07:17.611492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81416 len:8 PRP1 0x0 PRP2 0x0 00:20:46.298 [2024-11-29 13:07:17.611500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.298 [2024-11-29 13:07:17.611508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:46.298 [2024-11-29 13:07:17.611515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:46.298 [2024-11-29 13:07:17.611522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81424 len:8 PRP1 0x0 PRP2 0x0 00:20:46.298 [2024-11-29 13:07:17.611530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.298 [2024-11-29 13:07:17.611539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:46.298 [2024-11-29 13:07:17.611545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:46.298 [2024-11-29 13:07:17.611552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81432 len:8 PRP1 0x0 PRP2 0x0 00:20:46.298 [2024-11-29 13:07:17.611560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.298 [2024-11-29 13:07:17.611568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:46.298 [2024-11-29 13:07:17.611575] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:46.298 [2024-11-29 13:07:17.611583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81440 len:8 PRP1 0x0 PRP2 0x0 00:20:46.298 [2024-11-29 13:07:17.611590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.298 [2024-11-29 13:07:17.611599] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:46.298 [2024-11-29 13:07:17.611612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:46.298 [2024-11-29 13:07:17.611619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81448 len:8 PRP1 0x0 PRP2 0x0 00:20:46.298 [2024-11-29 13:07:17.611627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.298 [2024-11-29 13:07:17.611636] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:20:46.298 [2024-11-29 13:07:17.611643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:46.298 [2024-11-29 13:07:17.611650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81456 len:8 PRP1 0x0 PRP2 0x0 00:20:46.298 [2024-11-29 13:07:17.611663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.298 [2024-11-29 13:07:17.611689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:46.298 [2024-11-29 13:07:17.611696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:46.298 [2024-11-29 13:07:17.611704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81464 len:8 PRP1 0x0 PRP2 0x0 00:20:46.298 [2024-11-29 13:07:17.611713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.298 [2024-11-29 13:07:17.611722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:46.298 [2024-11-29 13:07:17.611729] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:46.298 [2024-11-29 13:07:17.611737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81472 len:8 PRP1 0x0 PRP2 0x0 00:20:46.298 [2024-11-29 13:07:17.611745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.298 [2024-11-29 13:07:17.611754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:46.298 [2024-11-29 13:07:17.611761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:46.298 [2024-11-29 13:07:17.611769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81480 len:8 PRP1 0x0 PRP2 0x0 00:20:46.298 [2024-11-29 13:07:17.611777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.298 [2024-11-29 13:07:17.611786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:46.298 [2024-11-29 13:07:17.611793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:46.298 [2024-11-29 13:07:17.611800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81488 len:8 PRP1 0x0 PRP2 0x0 00:20:46.298 [2024-11-29 13:07:17.611809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.298 [2024-11-29 13:07:17.611818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:46.298 [2024-11-29 13:07:17.611824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:46.298 [2024-11-29 13:07:17.611832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81496 len:8 PRP1 0x0 PRP2 0x0 00:20:46.298 [2024-11-29 13:07:17.611840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.298 [2024-11-29 13:07:17.611850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:46.298 [2024-11-29 13:07:17.611857] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:46.298 [2024-11-29 13:07:17.611865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81504 len:8 PRP1 0x0 PRP2 0x0 00:20:46.298 [2024-11-29 13:07:17.611873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.298 [2024-11-29 13:07:17.611883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:46.299 [2024-11-29 13:07:17.611895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:46.299 [2024-11-29 13:07:17.611902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81512 len:8 PRP1 0x0 PRP2 0x0 00:20:46.299 [2024-11-29 13:07:17.611912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.299 [2024-11-29 13:07:17.612044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.299 [2024-11-29 13:07:17.612069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.299 [2024-11-29 13:07:17.612081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.299 [2024-11-29 13:07:17.612096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.299 [2024-11-29 13:07:17.612107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.299 [2024-11-29 13:07:17.612116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.299 13:07:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:20:46.299 [2024-11-29 13:07:17.625432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.299 [2024-11-29 13:07:17.625466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.299 [2024-11-29 13:07:17.625480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d1e50 is same with the state(6) to be set 00:20:46.299 [2024-11-29 13:07:17.625835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:46.299 [2024-11-29 13:07:17.625869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d1e50 (9): Bad file descriptor 00:20:46.299 [2024-11-29 13:07:17.626068] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.299 [2024-11-29 13:07:17.626100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d1e50 with addr=10.0.0.3, port=4420 00:20:46.299 [2024-11-29 13:07:17.626116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d1e50 is same with the state(6) to be set 00:20:46.299 [2024-11-29 13:07:17.626140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d1e50 (9): Bad 
file descriptor 00:20:46.299 [2024-11-29 13:07:17.626162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:20:46.299 [2024-11-29 13:07:17.626174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:20:46.299 [2024-11-29 13:07:17.626189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:20:46.299 [2024-11-29 13:07:17.626203] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:20:46.299 [2024-11-29 13:07:17.626217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:47.233 5031.00 IOPS, 19.65 MiB/s [2024-11-29T13:07:18.748Z] [2024-11-29 13:07:18.626383] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.233 [2024-11-29 13:07:18.626458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d1e50 with addr=10.0.0.3, port=4420 00:20:47.233 [2024-11-29 13:07:18.626475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d1e50 is same with the state(6) to be set 00:20:47.233 [2024-11-29 13:07:18.626514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d1e50 (9): Bad file descriptor 00:20:47.233 [2024-11-29 13:07:18.626532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:20:47.233 [2024-11-29 13:07:18.626542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:20:47.233 [2024-11-29 13:07:18.626553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:20:47.233 [2024-11-29 13:07:18.626564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:20:47.233 [2024-11-29 13:07:18.626575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:48.169 3354.00 IOPS, 13.10 MiB/s [2024-11-29T13:07:19.684Z] [2024-11-29 13:07:19.626740] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:48.169 [2024-11-29 13:07:19.626813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d1e50 with addr=10.0.0.3, port=4420 00:20:48.169 [2024-11-29 13:07:19.626830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d1e50 is same with the state(6) to be set 00:20:48.169 [2024-11-29 13:07:19.626858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d1e50 (9): Bad file descriptor 00:20:48.169 [2024-11-29 13:07:19.626895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:20:48.169 [2024-11-29 13:07:19.626909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:20:48.169 [2024-11-29 13:07:19.626931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:20:48.169 [2024-11-29 13:07:19.626945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:20:48.169 [2024-11-29 13:07:19.626957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:49.363 2515.50 IOPS, 9.83 MiB/s [2024-11-29T13:07:20.878Z] 13:07:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:49.363 [2024-11-29 13:07:20.630797] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:49.363 [2024-11-29 13:07:20.630859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d1e50 with addr=10.0.0.3, port=4420 00:20:49.363 [2024-11-29 13:07:20.630875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d1e50 is same with the state(6) to be set 00:20:49.363 [2024-11-29 13:07:20.631152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d1e50 (9): Bad file descriptor 00:20:49.363 [2024-11-29 13:07:20.631429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:20:49.363 [2024-11-29 13:07:20.631442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:20:49.363 [2024-11-29 13:07:20.631453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:20:49.363 [2024-11-29 13:07:20.631464] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:20:49.363 [2024-11-29 13:07:20.631475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:49.622 [2024-11-29 13:07:20.935590] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:49.622 13:07:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82472 00:20:50.190 2012.40 IOPS, 7.86 MiB/s [2024-11-29T13:07:21.705Z] [2024-11-29 13:07:21.657655] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 
00:20:52.063 2960.00 IOPS, 11.56 MiB/s [2024-11-29T13:07:24.513Z] 4029.14 IOPS, 15.74 MiB/s [2024-11-29T13:07:25.888Z] 4846.25 IOPS, 18.93 MiB/s [2024-11-29T13:07:26.456Z] 5461.44 IOPS, 21.33 MiB/s [2024-11-29T13:07:26.716Z] 5964.90 IOPS, 23.30 MiB/s 00:20:55.201 Latency(us) 00:20:55.201 [2024-11-29T13:07:26.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.201 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:55.201 Verification LBA range: start 0x0 length 0x4000 00:20:55.201 NVMe0n1 : 10.01 5969.27 23.32 3963.60 0.00 12861.43 726.11 3035150.89 00:20:55.201 [2024-11-29T13:07:26.716Z] =================================================================================================================== 00:20:55.201 [2024-11-29T13:07:26.716Z] Total : 5969.27 23.32 3963.60 0.00 12861.43 0.00 3035150.89 00:20:55.201 { 00:20:55.201 "results": [ 00:20:55.201 { 00:20:55.201 "job": "NVMe0n1", 00:20:55.201 "core_mask": "0x4", 00:20:55.201 "workload": "verify", 00:20:55.201 "status": "finished", 00:20:55.201 "verify_range": { 00:20:55.201 "start": 0, 00:20:55.201 "length": 16384 00:20:55.201 }, 00:20:55.201 "queue_depth": 128, 00:20:55.201 "io_size": 4096, 00:20:55.201 "runtime": 10.008586, 00:20:55.201 "iops": 5969.274780673314, 00:20:55.201 "mibps": 23.317479612005133, 00:20:55.201 "io_failed": 39670, 00:20:55.201 "io_timeout": 0, 00:20:55.201 "avg_latency_us": 12861.433875967716, 00:20:55.201 "min_latency_us": 726.1090909090909, 00:20:55.201 "max_latency_us": 3035150.8945454545 00:20:55.201 } 00:20:55.201 ], 00:20:55.201 "core_count": 1 00:20:55.201 } 00:20:55.201 13:07:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82338 00:20:55.201 13:07:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82338 ']' 00:20:55.201 13:07:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82338 00:20:55.201 13:07:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:20:55.201 13:07:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:55.201 13:07:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82338 00:20:55.201 killing process with pid 82338 00:20:55.201 Received shutdown signal, test time was about 10.000000 seconds 00:20:55.201 00:20:55.201 Latency(us) 00:20:55.201 [2024-11-29T13:07:26.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.201 [2024-11-29T13:07:26.716Z] =================================================================================================================== 00:20:55.201 [2024-11-29T13:07:26.716Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:55.201 13:07:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:55.201 13:07:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:55.201 13:07:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82338' 00:20:55.201 13:07:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82338 00:20:55.201 13:07:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82338 00:20:55.201 13:07:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82581 00:20:55.201 13:07:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:20:55.201 13:07:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82581 /var/tmp/bdevperf.sock 00:20:55.201 13:07:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82581 ']' 00:20:55.201 13:07:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:55.201 13:07:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:55.201 13:07:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:55.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:55.460 13:07:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:55.460 13:07:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:55.460 [2024-11-29 13:07:26.756397] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:20:55.460 [2024-11-29 13:07:26.756491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82581 ] 00:20:55.460 [2024-11-29 13:07:26.900368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.460 [2024-11-29 13:07:26.945346] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:20:55.719 [2024-11-29 13:07:27.002003] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:55.719 13:07:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:55.719 13:07:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:20:55.719 13:07:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82588 00:20:55.719 13:07:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82581 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:20:55.719 13:07:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:20:55.979 13:07:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:20:56.238 NVMe0n1 00:20:56.238 13:07:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82631 00:20:56.238 13:07:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:56.238 13:07:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:20:56.497 Running I/O for 10 seconds... 
00:20:57.432 13:07:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:57.696 16256.00 IOPS, 63.50 MiB/s [2024-11-29T13:07:29.212Z] [2024-11-29 13:07:28.969589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.697 [2024-11-29 13:07:28.969687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.697 [2024-11-29 13:07:28.969703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with t[2024-11-29 13:07:28.969711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nshe state(6) to be set 00:20:57.697 id:0 cdw10:00000000 cdw11:00000000 00:20:57.697 [2024-11-29 13:07:28.969721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with t[2024-11-29 13:07:28.969722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 che state(6) to be set 00:20:57.697 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.697 [2024-11-29 13:07:28.969730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.697 [2024-11-29 13:07:28.969739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.697 [2024-11-29 13:07:28.969747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.697 [2024-11-29 13:07:28.969755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.697 [2024-11-29 13:07:28.969763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2030e50 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969949] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.969998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.970006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.970013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.970021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.970028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.970036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.970045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.970052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.970059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.970067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.970074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.970081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.970088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.970096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.970103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.970111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 
00:20:57.697 [2024-11-29 13:07:28.970118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.970125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.970133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.970140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.970147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.970154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.970162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.970169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.970176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.970189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.970197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.970221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.970245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.970253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.970262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.970270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.970278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.697 [2024-11-29 13:07:28.970286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is 
same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970700] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dac0 is same with the state(6) to be set 00:20:57.698 [2024-11-29 13:07:28.970786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:67000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.698 [2024-11-29 13:07:28.970805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.698 [2024-11-29 13:07:28.970824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:38872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.698 [2024-11-29 13:07:28.970834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.698 [2024-11-29 13:07:28.970845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.698 [2024-11-29 13:07:28.970855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.698 [2024-11-29 13:07:28.970866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:36408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.698 [2024-11-29 13:07:28.970875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.698 [2024-11-29 13:07:28.970885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:33320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.698 [2024-11-29 13:07:28.970894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.698 [2024-11-29 13:07:28.970905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:103864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.698 [2024-11-29 13:07:28.970914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.698 [2024-11-29 13:07:28.970967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.698 [2024-11-29 13:07:28.970978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.698 [2024-11-29 13:07:28.970989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:126016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.698 [2024-11-29 13:07:28.970998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.698 [2024-11-29 13:07:28.971009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:108488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.698 [2024-11-29 13:07:28.971018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.698 [2024-11-29 13:07:28.971029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:34904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.698 [2024-11-29 13:07:28.971038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.698 [2024-11-29 13:07:28.971049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:117688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.698 [2024-11-29 13:07:28.971058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.698 [2024-11-29 13:07:28.971068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:83432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:83080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:73176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:61024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:57.699 [2024-11-29 13:07:28.971211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:50416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:40680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:88688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:45488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:30528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:42624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:69888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:91704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 
13:07:28.971431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:98408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:118520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:130616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:90496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:43552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:87736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:44408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971627] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:121776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:93504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:33608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:41576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:113456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:52216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:51960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:76 nsid:1 lba:80136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:54824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.699 [2024-11-29 13:07:28.971855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:113760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.699 [2024-11-29 13:07:28.971864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.971879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:41664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.971888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.971907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:43320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.971917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.971928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:116232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.971937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.971947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:83592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.971956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.971966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:84800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.971975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.971985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:49528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.971994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.972004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.972013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.972023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:55288 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.972031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.972041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.972050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.972060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:113448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.972069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.972079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:92464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.972088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.972099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.972108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.972118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:74152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.972127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.972137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:25376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.972146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.972156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.972165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.972175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.972184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.972199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:36856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.972208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.972219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:84400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:57.700 [2024-11-29 13:07:28.972228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.972238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.972247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.972258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:85176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.972267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.972278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:45000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.972286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.972297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:45744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.972305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.972316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.972325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.972335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:47440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.972343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.972353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:90128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.972362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.972372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:107888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.972381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.972393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:114520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.972401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.972412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:114080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.972421] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.972431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:61248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.972440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.972450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.972459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.972470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:127008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.972479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.972489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.972498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.972513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:68440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.972522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.972532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.972541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.972551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:90520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.972560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.972570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:107328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.972579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.972590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.972598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.972608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:116200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.972617] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.972628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:104176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.972636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.700 [2024-11-29 13:07:28.972647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:105536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.700 [2024-11-29 13:07:28.972655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.701 [2024-11-29 13:07:28.972683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:106344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.701 [2024-11-29 13:07:28.972692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.701 [2024-11-29 13:07:28.972703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:89832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.701 [2024-11-29 13:07:28.972712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.701 [2024-11-29 13:07:28.972723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:72400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.701 [2024-11-29 13:07:28.972732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.701 [2024-11-29 13:07:28.972744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:113720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.701 [2024-11-29 13:07:28.972753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.701 [2024-11-29 13:07:28.972764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.701 [2024-11-29 13:07:28.972773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.701 [2024-11-29 13:07:28.972784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:76496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.701 [2024-11-29 13:07:28.972793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.701 [2024-11-29 13:07:28.972804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:118648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.701 [2024-11-29 13:07:28.972813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.701 [2024-11-29 13:07:28.972824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:87008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.701 [2024-11-29 13:07:28.972832] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.701 [2024-11-29 13:07:28.972849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:123360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.701 [2024-11-29 13:07:28.972858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.701 [2024-11-29 13:07:28.972869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:103912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.701 [2024-11-29 13:07:28.972878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.701 [2024-11-29 13:07:28.972889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:106136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.701 [2024-11-29 13:07:28.972907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.701 [2024-11-29 13:07:28.972919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.701 [2024-11-29 13:07:28.972929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.701 [2024-11-29 13:07:28.972940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:65056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.701 [2024-11-29 13:07:28.972949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.701 [2024-11-29 13:07:28.972960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.701 [2024-11-29 13:07:28.972969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.701 [2024-11-29 13:07:28.972980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.701 [2024-11-29 13:07:28.972989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.701 [2024-11-29 13:07:28.972999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:128456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.701 [2024-11-29 13:07:28.973008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.701 [2024-11-29 13:07:28.973019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:28776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.701 [2024-11-29 13:07:28.973028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.701 [2024-11-29 13:07:28.973038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:68768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.701 [2024-11-29 13:07:28.973047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.701 [2024-11-29 13:07:28.973058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.701 [2024-11-29 13:07:28.973067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.701 [2024-11-29 13:07:28.973077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:129728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.701 [2024-11-29 13:07:28.973086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.701 [2024-11-29 13:07:28.973097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:107336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.701 [2024-11-29 13:07:28.973106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.701 [2024-11-29 13:07:28.973116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:62048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.701 [2024-11-29 13:07:28.973125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.701 [2024-11-29 13:07:28.973136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:101464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.701 [2024-11-29 13:07:28.973145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.701 [2024-11-29 13:07:28.973155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:117912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.701 [2024-11-29 13:07:28.973164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.701 [2024-11-29 13:07:28.973180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:106656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.701 [2024-11-29 13:07:28.973196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.701 [2024-11-29 13:07:28.973207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:109480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.701 [2024-11-29 13:07:28.973216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.701 [2024-11-29 13:07:28.973227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:85088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.701 [2024-11-29 13:07:28.973236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.701 [2024-11-29 13:07:28.973247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.701 [2024-11-29 13:07:28.973256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:57.701 [2024-11-29 13:07:28.973267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.701 [2024-11-29 13:07:28.973291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.701 [2024-11-29 13:07:28.973302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.701 [2024-11-29 13:07:28.973310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.701 [2024-11-29 13:07:28.973320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.701 [2024-11-29 13:07:28.973329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.701 [2024-11-29 13:07:28.973339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:105200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.701 [2024-11-29 13:07:28.973348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.701 [2024-11-29 13:07:28.973358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.702 [2024-11-29 13:07:28.973366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.702 [2024-11-29 13:07:28.973377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.702 [2024-11-29 13:07:28.973386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.702 [2024-11-29 13:07:28.973396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.702 [2024-11-29 13:07:28.973405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.702 [2024-11-29 13:07:28.973415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:44560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.702 [2024-11-29 13:07:28.973424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.702 [2024-11-29 13:07:28.973433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209de20 is same with the state(6) to be set 00:20:57.702 [2024-11-29 13:07:28.973444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.702 [2024-11-29 13:07:28.973452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.702 [2024-11-29 13:07:28.973459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25512 len:8 PRP1 0x0 PRP2 0x0 00:20:57.702 [2024-11-29 13:07:28.973468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:57.702 [2024-11-29 13:07:28.973815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:20:57.702 [2024-11-29 13:07:28.973870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2030e50 (9): Bad file descriptor 00:20:57.702 [2024-11-29 13:07:28.974009] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.702 [2024-11-29 13:07:28.974033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2030e50 with addr=10.0.0.3, port=4420 00:20:57.702 [2024-11-29 13:07:28.974051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2030e50 is same with the state(6) to be set 00:20:57.702 [2024-11-29 13:07:28.974069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2030e50 (9): Bad file descriptor 00:20:57.702 [2024-11-29 13:07:28.974086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:20:57.702 [2024-11-29 13:07:28.974095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:20:57.702 [2024-11-29 13:07:28.974117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:20:57.702 [2024-11-29 13:07:28.974127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:20:57.702 [2024-11-29 13:07:28.974137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:20:57.702 13:07:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82631 00:20:59.576 9208.50 IOPS, 35.97 MiB/s [2024-11-29T13:07:31.091Z] 6139.00 IOPS, 23.98 MiB/s [2024-11-29T13:07:31.091Z] [2024-11-29 13:07:30.974318] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.576 [2024-11-29 13:07:30.974385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2030e50 with addr=10.0.0.3, port=4420 00:20:59.576 [2024-11-29 13:07:30.974402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2030e50 is same with the state(6) to be set 00:20:59.576 [2024-11-29 13:07:30.974426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2030e50 (9): Bad file descriptor 00:20:59.576 [2024-11-29 13:07:30.974459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:20:59.576 [2024-11-29 13:07:30.974471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:20:59.576 [2024-11-29 13:07:30.974482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:20:59.576 [2024-11-29 13:07:30.974493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:20:59.576 [2024-11-29 13:07:30.974504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:21:01.449 4604.25 IOPS, 17.99 MiB/s [2024-11-29T13:07:33.247Z] 3683.40 IOPS, 14.39 MiB/s [2024-11-29T13:07:33.247Z] [2024-11-29 13:07:32.974721] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.732 [2024-11-29 13:07:32.974797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2030e50 with addr=10.0.0.3, port=4420 00:21:01.732 [2024-11-29 13:07:32.974813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2030e50 is same with the state(6) to be set 00:21:01.732 [2024-11-29 13:07:32.974837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2030e50 (9): Bad file descriptor 00:21:01.732 [2024-11-29 13:07:32.974857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:21:01.732 [2024-11-29 13:07:32.974866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:21:01.732 [2024-11-29 13:07:32.974879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:21:01.732 [2024-11-29 13:07:32.974889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:21:01.732 [2024-11-29 13:07:32.974911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:21:03.607 3069.50 IOPS, 11.99 MiB/s [2024-11-29T13:07:35.122Z] 2631.00 IOPS, 10.28 MiB/s [2024-11-29T13:07:35.122Z] [2024-11-29 13:07:34.975055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:21:03.607 [2024-11-29 13:07:34.975298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:21:03.607 [2024-11-29 13:07:34.975319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:21:03.607 [2024-11-29 13:07:34.975333] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:21:03.607 [2024-11-29 13:07:34.975348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:21:04.541 2302.12 IOPS, 8.99 MiB/s 00:21:04.541 Latency(us) 00:21:04.541 [2024-11-29T13:07:36.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.541 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:21:04.541 NVMe0n1 : 8.14 2261.53 8.83 15.72 0.00 56119.25 7238.75 7015926.69 00:21:04.541 [2024-11-29T13:07:36.056Z] =================================================================================================================== 00:21:04.541 [2024-11-29T13:07:36.056Z] Total : 2261.53 8.83 15.72 0.00 56119.25 7238.75 7015926.69 00:21:04.541 { 00:21:04.541 "results": [ 00:21:04.541 { 00:21:04.541 "job": "NVMe0n1", 00:21:04.541 "core_mask": "0x4", 00:21:04.541 "workload": "randread", 00:21:04.541 "status": "finished", 00:21:04.541 "queue_depth": 128, 00:21:04.541 "io_size": 4096, 00:21:04.541 "runtime": 8.143603, 00:21:04.541 "iops": 2261.529693920492, 00:21:04.541 "mibps": 8.834100366876921, 00:21:04.541 "io_failed": 128, 00:21:04.541 "io_timeout": 0, 00:21:04.541 "avg_latency_us": 56119.2467062428, 00:21:04.541 "min_latency_us": 7238.749090909091, 00:21:04.541 "max_latency_us": 7015926.69090909 00:21:04.541 } 00:21:04.541 ], 00:21:04.541 "core_count": 1 00:21:04.541 } 00:21:04.541 13:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:04.541 Attaching 5 probes... 00:21:04.541 1376.392413: reset bdev controller NVMe0 00:21:04.541 1376.525223: reconnect bdev controller NVMe0 00:21:04.541 3376.780617: reconnect delay bdev controller NVMe0 00:21:04.541 3376.816500: reconnect bdev controller NVMe0 00:21:04.541 5377.180313: reconnect delay bdev controller NVMe0 00:21:04.541 5377.199635: reconnect bdev controller NVMe0 00:21:04.541 7377.618260: reconnect delay bdev controller NVMe0 00:21:04.541 7377.639294: reconnect bdev controller NVMe0 00:21:04.541 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:21:04.541 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:21:04.541 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82588 00:21:04.541 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:04.541 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82581 00:21:04.541 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82581 ']' 00:21:04.541 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82581 00:21:04.541 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:21:04.541 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:04.541 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82581 00:21:04.541 killing process with pid 82581 00:21:04.541 Received shutdown signal, test time was about 8.212710 seconds 00:21:04.541 00:21:04.541 Latency(us) 00:21:04.541 [2024-11-29T13:07:36.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.541 [2024-11-29T13:07:36.056Z] =================================================================================================================== 00:21:04.541 [2024-11-29T13:07:36.056Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:04.541 13:07:36 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:04.541 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:04.541 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82581' 00:21:04.541 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82581 00:21:04.541 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82581 00:21:04.801 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:05.059 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:21:05.059 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:21:05.059 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:05.059 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:21:05.059 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:05.059 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:21:05.059 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:05.060 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:05.060 rmmod nvme_tcp 00:21:05.060 rmmod nvme_fabrics 00:21:05.318 rmmod nvme_keyring 00:21:05.318 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:05.318 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:21:05.318 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:21:05.318 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 82150 ']' 00:21:05.318 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 82150 00:21:05.318 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82150 ']' 00:21:05.318 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82150 00:21:05.318 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:21:05.318 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:05.318 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82150 00:21:05.318 killing process with pid 82150 00:21:05.318 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:05.318 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:05.318 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82150' 00:21:05.318 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82150 00:21:05.318 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82150 00:21:05.578 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:05.578 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:05.578 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:05.578 13:07:36 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:21:05.578 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:21:05.578 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:05.578 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:21:05.578 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:05.578 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:05.578 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:05.578 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:05.578 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:05.578 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:05.578 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:05.578 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:05.578 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:05.578 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:05.578 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:05.578 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:05.578 13:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:05.578 13:07:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:05.579 13:07:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:05.579 13:07:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:05.579 13:07:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.579 13:07:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:05.579 13:07:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.579 13:07:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:21:05.579 00:21:05.579 real 0m46.680s 00:21:05.579 user 2m16.182s 00:21:05.579 sys 0m5.741s 00:21:05.579 13:07:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:05.579 ************************************ 00:21:05.579 13:07:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:05.579 END TEST nvmf_timeout 00:21:05.579 ************************************ 00:21:05.838 13:07:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:21:05.838 13:07:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:21:05.838 00:21:05.838 real 5m11.544s 00:21:05.838 user 13m29.474s 00:21:05.838 sys 1m12.703s 00:21:05.838 13:07:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:05.838 13:07:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 
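For reference, the throughput figures in the results block above are internally consistent: the MiB/s column is just IOPS scaled by the 4096-byte I/O size (IOPS * io_size / 2^20), and the check that gates the test counts the 'reconnect delay bdev controller NVMe0' probes in trace.txt (three here, so the (( 3 <= 2 )) failure condition is not met). The one-liner below re-derives the reported 8.834 MiB/s from the numbers in the JSON; it is a standalone sketch using only awk, not something the test itself runs.

    # MiB/s = IOPS * io_size / 2^20, using the values printed in the results JSON above.
    awk 'BEGIN { iops = 2261.529693920492; io_size = 4096; printf "%.6f MiB/s\n", iops * io_size / 1048576 }'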
00:21:05.838 ************************************ 00:21:05.838 END TEST nvmf_host 00:21:05.838 ************************************ 00:21:05.838 13:07:37 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:21:05.838 13:07:37 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:21:05.838 00:21:05.838 real 13m0.990s 00:21:05.838 user 31m14.999s 00:21:05.838 sys 3m15.448s 00:21:05.838 13:07:37 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:05.838 13:07:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:05.838 ************************************ 00:21:05.838 END TEST nvmf_tcp 00:21:05.838 ************************************ 00:21:05.838 13:07:37 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:21:05.838 13:07:37 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:21:05.838 13:07:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:05.838 13:07:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:05.838 13:07:37 -- common/autotest_common.sh@10 -- # set +x 00:21:05.838 ************************************ 00:21:05.838 START TEST nvmf_dif 00:21:05.838 ************************************ 00:21:05.838 13:07:37 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:21:05.838 * Looking for test storage... 00:21:05.838 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:05.838 13:07:37 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:05.838 13:07:37 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:21:05.838 13:07:37 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:06.098 13:07:37 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:06.098 13:07:37 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:06.098 13:07:37 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:06.098 13:07:37 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:06.098 13:07:37 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:21:06.098 13:07:37 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:21:06.098 13:07:37 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:21:06.098 13:07:37 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:21:06.098 13:07:37 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:21:06.098 13:07:37 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:21:06.098 13:07:37 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:21:06.098 13:07:37 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:06.098 13:07:37 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:21:06.098 13:07:37 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:21:06.098 13:07:37 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:06.098 13:07:37 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:06.098 13:07:37 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:21:06.098 13:07:37 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:21:06.098 13:07:37 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:06.098 13:07:37 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:21:06.098 13:07:37 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:21:06.098 13:07:37 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:21:06.098 13:07:37 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:21:06.098 13:07:37 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:06.098 13:07:37 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:21:06.098 13:07:37 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:21:06.098 13:07:37 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:06.098 13:07:37 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:06.098 13:07:37 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:21:06.098 13:07:37 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:06.098 13:07:37 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:06.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.098 --rc genhtml_branch_coverage=1 00:21:06.098 --rc genhtml_function_coverage=1 00:21:06.098 --rc genhtml_legend=1 00:21:06.098 --rc geninfo_all_blocks=1 00:21:06.098 --rc geninfo_unexecuted_blocks=1 00:21:06.098 00:21:06.098 ' 00:21:06.098 13:07:37 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:06.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.098 --rc genhtml_branch_coverage=1 00:21:06.098 --rc genhtml_function_coverage=1 00:21:06.098 --rc genhtml_legend=1 00:21:06.098 --rc geninfo_all_blocks=1 00:21:06.098 --rc geninfo_unexecuted_blocks=1 00:21:06.098 00:21:06.098 ' 00:21:06.098 13:07:37 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:06.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.098 --rc genhtml_branch_coverage=1 00:21:06.098 --rc genhtml_function_coverage=1 00:21:06.098 --rc genhtml_legend=1 00:21:06.098 --rc geninfo_all_blocks=1 00:21:06.098 --rc geninfo_unexecuted_blocks=1 00:21:06.098 00:21:06.098 ' 00:21:06.098 13:07:37 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:06.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.098 --rc genhtml_branch_coverage=1 00:21:06.098 --rc genhtml_function_coverage=1 00:21:06.098 --rc genhtml_legend=1 00:21:06.098 --rc geninfo_all_blocks=1 00:21:06.098 --rc geninfo_unexecuted_blocks=1 00:21:06.098 00:21:06.098 ' 00:21:06.098 13:07:37 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:06.098 13:07:37 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:21:06.098 13:07:37 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:06.098 13:07:37 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:06.098 13:07:37 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:06.098 13:07:37 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:06.098 13:07:37 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:06.098 13:07:37 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:06.098 13:07:37 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:06.098 13:07:37 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:06.098 13:07:37 nvmf_dif -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:06.098 13:07:37 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:06.098 13:07:37 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:21:06.098 13:07:37 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=e271b7c2-49c4-4f9a-9b27-9cb25d329b31 00:21:06.098 13:07:37 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:06.098 13:07:37 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:06.098 13:07:37 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:06.098 13:07:37 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:06.098 13:07:37 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:06.098 13:07:37 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:21:06.098 13:07:37 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:06.098 13:07:37 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:06.098 13:07:37 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:06.098 13:07:37 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.098 13:07:37 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.098 13:07:37 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.098 13:07:37 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:21:06.098 13:07:37 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.098 13:07:37 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:21:06.098 13:07:37 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:06.099 13:07:37 nvmf_dif -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:06.099 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:06.099 13:07:37 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:21:06.099 13:07:37 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:21:06.099 13:07:37 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:21:06.099 13:07:37 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:21:06.099 13:07:37 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.099 13:07:37 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:06.099 13:07:37 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:06.099 Cannot find device 
"nvmf_init_br" 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@162 -- # true 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:06.099 Cannot find device "nvmf_init_br2" 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@163 -- # true 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:06.099 Cannot find device "nvmf_tgt_br" 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@164 -- # true 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:06.099 Cannot find device "nvmf_tgt_br2" 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@165 -- # true 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:06.099 Cannot find device "nvmf_init_br" 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@166 -- # true 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:06.099 Cannot find device "nvmf_init_br2" 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@167 -- # true 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:06.099 Cannot find device "nvmf_tgt_br" 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@168 -- # true 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:06.099 Cannot find device "nvmf_tgt_br2" 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@169 -- # true 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:06.099 Cannot find device "nvmf_br" 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@170 -- # true 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:06.099 Cannot find device "nvmf_init_if" 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@171 -- # true 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:06.099 Cannot find device "nvmf_init_if2" 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@172 -- # true 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:06.099 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@173 -- # true 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:06.099 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@174 -- # true 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:06.099 13:07:37 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:06.358 13:07:37 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:06.358 13:07:37 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:06.358 13:07:37 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev 
nvmf_init_if2 00:21:06.358 13:07:37 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:06.358 13:07:37 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:06.358 13:07:37 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:06.358 13:07:37 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:06.358 13:07:37 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:06.358 13:07:37 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:06.358 13:07:37 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:06.358 13:07:37 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:06.358 13:07:37 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:06.358 13:07:37 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:06.358 13:07:37 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:06.358 13:07:37 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:06.358 13:07:37 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:06.358 13:07:37 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:06.358 13:07:37 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:06.358 13:07:37 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:06.358 13:07:37 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:06.358 13:07:37 nvmf_dif -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:06.358 13:07:37 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:06.358 13:07:37 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:06.358 13:07:37 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:06.358 13:07:37 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:06.358 13:07:37 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:06.358 13:07:37 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:06.358 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:06.358 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:21:06.358 00:21:06.358 --- 10.0.0.3 ping statistics --- 00:21:06.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.358 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:21:06.358 13:07:37 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:06.358 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:21:06.358 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:21:06.358 00:21:06.358 --- 10.0.0.4 ping statistics --- 00:21:06.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.358 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:21:06.358 13:07:37 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:06.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:06.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:21:06.358 00:21:06.358 --- 10.0.0.1 ping statistics --- 00:21:06.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.358 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:21:06.358 13:07:37 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:06.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:06.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:21:06.358 00:21:06.358 --- 10.0.0.2 ping statistics --- 00:21:06.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.358 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:21:06.358 13:07:37 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:06.358 13:07:37 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:21:06.358 13:07:37 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:21:06.358 13:07:37 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:06.617 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:06.876 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:06.876 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:06.876 13:07:38 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:06.876 13:07:38 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:06.876 13:07:38 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:06.876 13:07:38 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:06.876 13:07:38 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:06.876 13:07:38 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:06.876 13:07:38 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:21:06.876 13:07:38 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:21:06.876 13:07:38 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:06.876 13:07:38 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:06.876 13:07:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:06.876 13:07:38 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=83122 00:21:06.876 13:07:38 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:06.877 13:07:38 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 83122 00:21:06.877 13:07:38 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 83122 ']' 00:21:06.877 13:07:38 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.877 13:07:38 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:06.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.877 13:07:38 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
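
Note: the trace above is the harness building its virtual test network and then starting the NVMe-oF target inside it: veth pairs for the initiator side (nvmf_init_if/nvmf_init_if2 at 10.0.0.1-2) and for the target side (nvmf_tgt_if/nvmf_tgt_if2 at 10.0.0.3-4, moved into the nvmf_tgt_ns_spdk namespace), all attached to the nvmf_br bridge, with iptables ACCEPT rules for TCP/4420 on the initiator interfaces plus a FORWARD rule on the bridge, and four pings confirming reachability in both directions. A minimal sketch of the launch-and-wait step that follows, using the binary and socket paths shown in the log; the polling loop is an illustrative stand-in for the harness's waitforlisten helper, not its exact implementation:

    # Start nvmf_tgt inside the target namespace, as nvmfappstart does in the trace above.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # Wait until the app answers RPCs on the UNIX socket the harness waits on before configuring it.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
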
00:21:06.877 13:07:38 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:06.877 13:07:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:06.877 [2024-11-29 13:07:38.287330] Starting SPDK v25.01-pre git sha1 89b293437 / DPDK 24.03.0 initialization... 00:21:06.877 [2024-11-29 13:07:38.287444] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.136 [2024-11-29 13:07:38.441537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.136 [2024-11-29 13:07:38.495764] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:07.136 [2024-11-29 13:07:38.495833] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:07.136 [2024-11-29 13:07:38.495853] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:07.136 [2024-11-29 13:07:38.495864] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:07.136 [2024-11-29 13:07:38.495873] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:07.136 [2024-11-29 13:07:38.496346] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.136 [2024-11-29 13:07:38.552263] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:07.136 13:07:38 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:07.136 13:07:38 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:21:07.136 13:07:38 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:07.136 13:07:38 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:07.136 13:07:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:07.408 13:07:38 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:07.408 13:07:38 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:21:07.408 13:07:38 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:21:07.408 13:07:38 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.408 13:07:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:07.408 [2024-11-29 13:07:38.670789] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.408 13:07:38 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.408 13:07:38 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:21:07.408 13:07:38 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:07.408 13:07:38 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:07.408 13:07:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:07.408 ************************************ 00:21:07.408 START TEST fio_dif_1_default 00:21:07.408 ************************************ 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:21:07.408 13:07:38 
nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:07.408 bdev_null0 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:07.408 [2024-11-29 13:07:38.714976] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:07.408 { 00:21:07.408 "params": { 00:21:07.408 "name": "Nvme$subsystem", 00:21:07.408 "trtype": "$TEST_TRANSPORT", 00:21:07.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.408 "adrfam": "ipv4", 00:21:07.408 "trsvcid": "$NVMF_PORT", 00:21:07.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.408 "hdgst": ${hdgst:-false}, 00:21:07.408 "ddgst": ${ddgst:-false} 00:21:07.408 }, 00:21:07.408 "method": "bdev_nvme_attach_controller" 00:21:07.408 } 
00:21:07.408 EOF 00:21:07.408 )") 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
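
Note: the initiator side of this test runs entirely in userspace: fio loads SPDK's external bdev ioengine and receives a JSON configuration on one file descriptor and the generated job file on another. The bdev_nvme_attach_controller parameters being assembled above (Nvme0 attached to nqn.2016-06.io.spdk:cnode0 at 10.0.0.3:4420 over TCP, as printed just below) are what connect fio to the target that was just configured. A rough standalone equivalent of the fio_bdev call, with bdev.json and job.fio standing in for the /dev/fd/62 and /dev/fd/61 pipes used by the harness; the exact JSON envelope produced by gen_nvmf_target_json is assumed rather than shown verbatim in the log:

    # Run fio against SPDK bdevs via the external ioengine shipped in the SPDK build tree.
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio
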
00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:07.408 "params": { 00:21:07.408 "name": "Nvme0", 00:21:07.408 "trtype": "tcp", 00:21:07.408 "traddr": "10.0.0.3", 00:21:07.408 "adrfam": "ipv4", 00:21:07.408 "trsvcid": "4420", 00:21:07.408 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:07.408 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:07.408 "hdgst": false, 00:21:07.408 "ddgst": false 00:21:07.408 }, 00:21:07.408 "method": "bdev_nvme_attach_controller" 00:21:07.408 }' 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:07.408 13:07:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:07.681 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:07.681 fio-3.35 00:21:07.681 Starting 1 thread 00:21:19.887 00:21:19.887 filename0: (groupid=0, jobs=1): err= 0: pid=83181: Fri Nov 29 13:07:49 2024 00:21:19.887 read: IOPS=9366, BW=36.6MiB/s (38.4MB/s)(366MiB/10001msec) 00:21:19.887 slat (usec): min=5, max=412, avg= 7.58, stdev= 4.87 00:21:19.887 clat (usec): min=319, max=4707, avg=404.75, stdev=66.89 00:21:19.887 lat (usec): min=325, max=4748, avg=412.33, stdev=67.51 00:21:19.887 clat percentiles (usec): 00:21:19.887 | 1.00th=[ 326], 5.00th=[ 347], 10.00th=[ 355], 20.00th=[ 367], 00:21:19.887 | 30.00th=[ 375], 40.00th=[ 383], 50.00th=[ 392], 60.00th=[ 400], 00:21:19.887 | 70.00th=[ 412], 80.00th=[ 433], 90.00th=[ 478], 95.00th=[ 529], 00:21:19.887 | 99.00th=[ 603], 99.50th=[ 627], 99.90th=[ 979], 99.95th=[ 1045], 00:21:19.887 | 99.99th=[ 1893] 00:21:19.887 bw ( KiB/s): min=32479, max=40128, per=99.78%, avg=37386.05, stdev=1887.15, samples=19 00:21:19.887 iops : min= 8119, max=10032, avg=9346.47, stdev=471.90, samples=19 00:21:19.887 lat (usec) : 500=92.49%, 750=7.34%, 1000=0.09% 00:21:19.887 lat (msec) : 2=0.08%, 10=0.01% 00:21:19.887 cpu : usr=83.04%, sys=14.55%, ctx=86, majf=0, minf=9 00:21:19.887 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:19.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:19.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:19.887 issued rwts: total=93676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:19.887 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:19.887 00:21:19.887 Run status group 0 (all jobs): 
00:21:19.887 READ: bw=36.6MiB/s (38.4MB/s), 36.6MiB/s-36.6MiB/s (38.4MB/s-38.4MB/s), io=366MiB (384MB), run=10001-10001msec 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.887 00:21:19.887 real 0m11.052s 00:21:19.887 user 0m8.977s 00:21:19.887 sys 0m1.747s 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:19.887 ************************************ 00:21:19.887 END TEST fio_dif_1_default 00:21:19.887 ************************************ 00:21:19.887 13:07:49 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:21:19.887 13:07:49 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:19.887 13:07:49 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:19.887 13:07:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:19.887 ************************************ 00:21:19.887 START TEST fio_dif_1_multi_subsystems 00:21:19.887 ************************************ 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:19.887 bdev_null0 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:19.887 [2024-11-29 13:07:49.815558] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:21:19.887 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:19.888 bdev_null1 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp 
-a 10.0.0.3 -s 4420 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:19.888 { 00:21:19.888 "params": { 00:21:19.888 "name": "Nvme$subsystem", 00:21:19.888 "trtype": "$TEST_TRANSPORT", 00:21:19.888 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:19.888 "adrfam": "ipv4", 00:21:19.888 "trsvcid": "$NVMF_PORT", 00:21:19.888 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:19.888 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:19.888 "hdgst": ${hdgst:-false}, 00:21:19.888 "ddgst": ${ddgst:-false} 00:21:19.888 }, 00:21:19.888 "method": "bdev_nvme_attach_controller" 00:21:19.888 } 00:21:19.888 EOF 00:21:19.888 )") 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:19.888 13:07:49 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:19.888 { 00:21:19.888 "params": { 00:21:19.888 "name": "Nvme$subsystem", 00:21:19.888 "trtype": "$TEST_TRANSPORT", 00:21:19.888 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:19.888 "adrfam": "ipv4", 00:21:19.888 "trsvcid": "$NVMF_PORT", 00:21:19.888 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:19.888 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:19.888 "hdgst": ${hdgst:-false}, 00:21:19.888 "ddgst": ${ddgst:-false} 00:21:19.888 }, 00:21:19.888 "method": "bdev_nvme_attach_controller" 00:21:19.888 } 00:21:19.888 EOF 00:21:19.888 )") 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:19.888 "params": { 00:21:19.888 "name": "Nvme0", 00:21:19.888 "trtype": "tcp", 00:21:19.888 "traddr": "10.0.0.3", 00:21:19.888 "adrfam": "ipv4", 00:21:19.888 "trsvcid": "4420", 00:21:19.888 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:19.888 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:19.888 "hdgst": false, 00:21:19.888 "ddgst": false 00:21:19.888 }, 00:21:19.888 "method": "bdev_nvme_attach_controller" 00:21:19.888 },{ 00:21:19.888 "params": { 00:21:19.888 "name": "Nvme1", 00:21:19.888 "trtype": "tcp", 00:21:19.888 "traddr": "10.0.0.3", 00:21:19.888 "adrfam": "ipv4", 00:21:19.888 "trsvcid": "4420", 00:21:19.888 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:19.888 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:19.888 "hdgst": false, 00:21:19.888 "ddgst": false 00:21:19.888 }, 00:21:19.888 "method": "bdev_nvme_attach_controller" 00:21:19.888 }' 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:19.888 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:19.889 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:19.889 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:19.889 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:19.889 
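
Note: compared with the single-bdev case, this test publishes two independent namespaces and lets fio drive one job per subsystem (filename0 and filename1 below); the printed config above simply attaches two controllers, Nvme0 and Nvme1, to cnode0 and cnode1 on the same 10.0.0.3:4420 listener. The target-side setup is the same four RPCs per subsystem that appear in the trace, collected here into a sketch; the rpc() wrapper is shorthand for scripts/rpc.py pointed at the socket the harness waits on, which is roughly what its rpc_cmd helper does:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    for i in 0 1; do
        rpc bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
        rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i --serial-number 53313233-$i --allow-any-host
        rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
        rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.3 -s 4420
    done
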
13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:19.889 13:07:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:19.889 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:19.889 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:19.889 fio-3.35 00:21:19.889 Starting 2 threads 00:21:29.868 00:21:29.868 filename0: (groupid=0, jobs=1): err= 0: pid=83341: Fri Nov 29 13:08:00 2024 00:21:29.868 read: IOPS=5170, BW=20.2MiB/s (21.2MB/s)(202MiB/10001msec) 00:21:29.868 slat (nsec): min=6011, max=99513, avg=12538.56, stdev=5127.87 00:21:29.868 clat (usec): min=535, max=2240, avg=739.84, stdev=86.39 00:21:29.868 lat (usec): min=543, max=2251, avg=752.38, stdev=86.93 00:21:29.868 clat percentiles (usec): 00:21:29.868 | 1.00th=[ 603], 5.00th=[ 635], 10.00th=[ 652], 20.00th=[ 676], 00:21:29.868 | 30.00th=[ 693], 40.00th=[ 709], 50.00th=[ 725], 60.00th=[ 742], 00:21:29.868 | 70.00th=[ 758], 80.00th=[ 799], 90.00th=[ 857], 95.00th=[ 906], 00:21:29.868 | 99.00th=[ 1012], 99.50th=[ 1045], 99.90th=[ 1123], 99.95th=[ 1434], 00:21:29.868 | 99.99th=[ 2212] 00:21:29.868 bw ( KiB/s): min=18016, max=22848, per=50.43%, avg=20862.32, stdev=1055.16, samples=19 00:21:29.868 iops : min= 4504, max= 5712, avg=5215.58, stdev=263.79, samples=19 00:21:29.868 lat (usec) : 750=65.79%, 1000=32.95% 00:21:29.868 lat (msec) : 2=1.24%, 4=0.02% 00:21:29.868 cpu : usr=89.71%, sys=8.67%, ctx=8, majf=0, minf=0 00:21:29.868 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:29.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:29.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:29.868 issued rwts: total=51712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:29.868 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:29.868 filename1: (groupid=0, jobs=1): err= 0: pid=83342: Fri Nov 29 13:08:00 2024 00:21:29.868 read: IOPS=5171, BW=20.2MiB/s (21.2MB/s)(202MiB/10001msec) 00:21:29.868 slat (usec): min=5, max=100, avg=12.79, stdev= 5.26 00:21:29.868 clat (usec): min=382, max=2244, avg=738.17, stdev=82.90 00:21:29.868 lat (usec): min=389, max=2257, avg=750.96, stdev=83.38 00:21:29.868 clat percentiles (usec): 00:21:29.868 | 1.00th=[ 627], 5.00th=[ 652], 10.00th=[ 660], 20.00th=[ 676], 00:21:29.868 | 30.00th=[ 693], 40.00th=[ 701], 50.00th=[ 717], 60.00th=[ 734], 00:21:29.868 | 70.00th=[ 758], 80.00th=[ 791], 90.00th=[ 848], 95.00th=[ 898], 00:21:29.868 | 99.00th=[ 1012], 99.50th=[ 1037], 99.90th=[ 1123], 99.95th=[ 1450], 00:21:29.868 | 99.99th=[ 2212] 00:21:29.868 bw ( KiB/s): min=18016, max=22848, per=50.43%, avg=20864.00, stdev=1054.92, samples=19 00:21:29.868 iops : min= 4504, max= 5712, avg=5216.00, stdev=263.73, samples=19 00:21:29.868 lat (usec) : 500=0.01%, 750=68.17%, 1000=30.68% 00:21:29.868 lat (msec) : 2=1.13%, 4=0.02% 00:21:29.868 cpu : usr=89.61%, sys=8.87%, ctx=13, majf=0, minf=0 00:21:29.868 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:29.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:29.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:29.868 issued rwts: total=51716,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:21:29.868 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:29.868 00:21:29.868 Run status group 0 (all jobs): 00:21:29.868 READ: bw=40.4MiB/s (42.4MB/s), 20.2MiB/s-20.2MiB/s (21.2MB/s-21.2MB/s), io=404MiB (424MB), run=10001-10001msec 00:21:29.868 13:08:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:21:29.868 13:08:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:21:29.868 13:08:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:21:29.868 13:08:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:29.868 13:08:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:21:29.868 13:08:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:29.868 13:08:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.868 13:08:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:29.868 13:08:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.868 13:08:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:29.868 13:08:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.868 13:08:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:29.868 13:08:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.868 13:08:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:21:29.868 13:08:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:29.868 13:08:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:21:29.868 13:08:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:29.868 13:08:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.868 13:08:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:29.868 13:08:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.868 13:08:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:29.868 13:08:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.868 13:08:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:29.868 13:08:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.868 00:21:29.868 real 0m11.172s 00:21:29.868 user 0m18.717s 00:21:29.868 sys 0m2.056s 00:21:29.868 13:08:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:29.868 13:08:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:29.868 ************************************ 00:21:29.868 END TEST fio_dif_1_multi_subsystems 00:21:29.868 ************************************ 00:21:29.868 13:08:01 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:21:29.868 13:08:01 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:29.868 
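
Note: each case ends by unwinding exactly what it created, so the next one starts from a clean target; the destroy_subsystems trace above removes the subsystems first and the null bdevs second. A sketch of that teardown for the two-subsystem case, using the same rpc() shorthand as before:

    for i in 0 1; do
        rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
        rpc bdev_null_delete bdev_null$i
    done

The fio_dif_rand_params case that begins here reuses the same flow but switches the null bdev to DIF type 3 and changes the job parameters (128 KiB blocks, three jobs, queue depth 3, 5-second runtime), as shown in the trace that follows.
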
13:08:01 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:29.868 13:08:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:29.868 ************************************ 00:21:29.868 START TEST fio_dif_rand_params 00:21:29.868 ************************************ 00:21:29.868 13:08:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:21:29.868 13:08:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:21:29.868 13:08:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:21:29.868 13:08:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:21:29.868 13:08:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:21:29.868 13:08:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:21:29.868 13:08:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:21:29.868 13:08:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:21:29.868 13:08:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:21:29.868 13:08:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:29.868 13:08:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:29.868 13:08:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:29.868 13:08:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:29.869 bdev_null0 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:29.869 [2024-11-29 13:08:01.041571] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@106 -- # fio /dev/fd/62 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.869 { 00:21:29.869 "params": { 00:21:29.869 "name": "Nvme$subsystem", 00:21:29.869 "trtype": "$TEST_TRANSPORT", 00:21:29.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.869 "adrfam": "ipv4", 00:21:29.869 "trsvcid": "$NVMF_PORT", 00:21:29.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.869 "hdgst": ${hdgst:-false}, 00:21:29.869 "ddgst": ${ddgst:-false} 00:21:29.869 }, 00:21:29.869 "method": "bdev_nvme_attach_controller" 00:21:29.869 } 00:21:29.869 EOF 00:21:29.869 )") 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
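
Note: the only target-side difference from the earlier cases is --dif-type 3 on the null bdev, so this case exercises DIF type 3 protection metadata instead of type 1 while the transport keeps --dif-insert-or-strip enabled. On the fio side the trace sets bs=128k, numjobs=3, iodepth=3 and runtime=5; the generated job file itself is not reproduced in the log, so the following is only a hypothetical equivalent in fio's INI format, and the filename= value in particular is an assumption about the bdev name the job targets:

    [global]
    ioengine=spdk_bdev
    thread=1
    rw=randread
    bs=128k
    iodepth=3
    runtime=5
    time_based=1
    [filename0]
    numjobs=3
    filename=Nvme0n1
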
00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:29.869 "params": { 00:21:29.869 "name": "Nvme0", 00:21:29.869 "trtype": "tcp", 00:21:29.869 "traddr": "10.0.0.3", 00:21:29.869 "adrfam": "ipv4", 00:21:29.869 "trsvcid": "4420", 00:21:29.869 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:29.869 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:29.869 "hdgst": false, 00:21:29.869 "ddgst": false 00:21:29.869 }, 00:21:29.869 "method": "bdev_nvme_attach_controller" 00:21:29.869 }' 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:29.869 13:08:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:29.869 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:29.869 ... 
00:21:29.869 fio-3.35 00:21:29.869 Starting 3 threads 00:21:36.439 00:21:36.439 filename0: (groupid=0, jobs=1): err= 0: pid=83499: Fri Nov 29 13:08:06 2024 00:21:36.439 read: IOPS=269, BW=33.6MiB/s (35.3MB/s)(168MiB/5004msec) 00:21:36.439 slat (nsec): min=6386, max=70479, avg=10340.66, stdev=5878.32 00:21:36.439 clat (usec): min=9678, max=31913, avg=11118.82, stdev=1942.57 00:21:36.439 lat (usec): min=9686, max=31937, avg=11129.16, stdev=1943.79 00:21:36.439 clat percentiles (usec): 00:21:36.439 | 1.00th=[ 9765], 5.00th=[10159], 10.00th=[10552], 20.00th=[10683], 00:21:36.439 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10814], 60.00th=[10945], 00:21:36.439 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11469], 95.00th=[11863], 00:21:36.439 | 99.00th=[23987], 99.50th=[30278], 99.90th=[31851], 99.95th=[31851], 00:21:36.439 | 99.99th=[31851] 00:21:36.439 bw ( KiB/s): min=31488, max=36096, per=33.43%, avg=34552.44, stdev=1335.31, samples=9 00:21:36.439 iops : min= 246, max= 282, avg=269.89, stdev=10.47, samples=9 00:21:36.439 lat (msec) : 10=3.79%, 20=95.10%, 50=1.11% 00:21:36.439 cpu : usr=92.08%, sys=7.22%, ctx=12, majf=0, minf=0 00:21:36.439 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:36.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.439 issued rwts: total=1347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.439 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:36.439 filename0: (groupid=0, jobs=1): err= 0: pid=83500: Fri Nov 29 13:08:06 2024 00:21:36.439 read: IOPS=269, BW=33.7MiB/s (35.3MB/s)(168MiB/5002msec) 00:21:36.439 slat (nsec): min=6528, max=77011, avg=14928.64, stdev=7950.21 00:21:36.439 clat (usec): min=8484, max=31997, avg=11104.79, stdev=1983.48 00:21:36.439 lat (usec): min=8493, max=32011, avg=11119.72, stdev=1983.59 00:21:36.439 clat percentiles (usec): 00:21:36.439 | 1.00th=[ 9765], 5.00th=[10290], 10.00th=[10552], 20.00th=[10683], 00:21:36.439 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10814], 60.00th=[10945], 00:21:36.439 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11469], 95.00th=[11731], 00:21:36.439 | 99.00th=[15795], 99.50th=[31327], 99.90th=[32113], 99.95th=[32113], 00:21:36.439 | 99.99th=[32113] 00:21:36.439 bw ( KiB/s): min=31488, max=36096, per=33.43%, avg=34560.00, stdev=1330.22, samples=9 00:21:36.439 iops : min= 246, max= 282, avg=270.00, stdev=10.39, samples=9 00:21:36.439 lat (msec) : 10=4.01%, 20=95.10%, 50=0.89% 00:21:36.439 cpu : usr=92.44%, sys=6.82%, ctx=4, majf=0, minf=0 00:21:36.439 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:36.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.439 issued rwts: total=1347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.439 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:36.439 filename0: (groupid=0, jobs=1): err= 0: pid=83501: Fri Nov 29 13:08:06 2024 00:21:36.439 read: IOPS=269, BW=33.7MiB/s (35.3MB/s)(168MiB/5003msec) 00:21:36.439 slat (nsec): min=6411, max=81872, avg=13377.63, stdev=7069.24 00:21:36.439 clat (usec): min=7017, max=31997, avg=11109.51, stdev=1988.87 00:21:36.439 lat (usec): min=7024, max=32009, avg=11122.89, stdev=1988.97 00:21:36.439 clat percentiles (usec): 00:21:36.439 | 1.00th=[ 9765], 5.00th=[10290], 10.00th=[10552], 20.00th=[10683], 00:21:36.439 | 
30.00th=[10683], 40.00th=[10814], 50.00th=[10814], 60.00th=[10945], 00:21:36.439 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11469], 95.00th=[11731], 00:21:36.439 | 99.00th=[15795], 99.50th=[31327], 99.90th=[32113], 99.95th=[32113], 00:21:36.439 | 99.99th=[32113] 00:21:36.439 bw ( KiB/s): min=31488, max=36096, per=33.35%, avg=34474.67, stdev=1354.62, samples=9 00:21:36.439 iops : min= 246, max= 282, avg=269.33, stdev=10.58, samples=9 00:21:36.439 lat (msec) : 10=3.93%, 20=95.17%, 50=0.89% 00:21:36.439 cpu : usr=93.06%, sys=6.18%, ctx=131, majf=0, minf=0 00:21:36.439 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:36.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.440 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.440 issued rwts: total=1347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.440 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:36.440 00:21:36.440 Run status group 0 (all jobs): 00:21:36.440 READ: bw=101MiB/s (106MB/s), 33.6MiB/s-33.7MiB/s (35.3MB/s-35.3MB/s), io=505MiB (530MB), run=5002-5004msec 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 
--md-size 16 --dif-type 2 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:36.440 bdev_null0 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:36.440 [2024-11-29 13:08:07.110174] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:36.440 bdev_null1 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
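
Each of the three subsystems in this NULL_DIF=2 pass is wired up the same way as cnode0 above; the remaining rpc_cmd calls for cnode1 and cnode2 follow below. Condensed into plain rpc.py calls, the sequence is roughly the sketch below: the rpc.py path, the explicit loop and the default RPC socket are assumptions, while the command names, sizes and flags mirror the trace.

# Assumes the nvmf target is already running and listening on the default RPC
# socket (/var/tmp/spdk.sock); rpc_cmd in the test harness wraps this script.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in 0 1 2; do
  # 64 MiB null bdev, 512-byte blocks, 16 bytes of metadata, DIF type 2
  "$rpc" bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 2
  "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
    --serial-number "53313233-$i" --allow-any-host
  "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
  "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
    -t tcp -a 10.0.0.3 -s 4420
done
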
00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:36.440 bdev_null2 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:36.440 13:08:07 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:36.440 { 00:21:36.440 "params": { 00:21:36.440 "name": "Nvme$subsystem", 00:21:36.440 "trtype": "$TEST_TRANSPORT", 00:21:36.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.440 "adrfam": "ipv4", 00:21:36.440 "trsvcid": "$NVMF_PORT", 00:21:36.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.440 "hdgst": ${hdgst:-false}, 00:21:36.440 "ddgst": ${ddgst:-false} 00:21:36.440 }, 00:21:36.440 "method": "bdev_nvme_attach_controller" 00:21:36.440 } 00:21:36.440 EOF 00:21:36.440 )") 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:36.440 13:08:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:36.441 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:36.441 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:21:36.441 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:36.441 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:36.441 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:36.441 13:08:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:36.441 13:08:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:36.441 { 00:21:36.441 "params": { 00:21:36.441 "name": "Nvme$subsystem", 00:21:36.441 "trtype": "$TEST_TRANSPORT", 00:21:36.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.441 "adrfam": "ipv4", 00:21:36.441 "trsvcid": "$NVMF_PORT", 00:21:36.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.441 "hdgst": ${hdgst:-false}, 00:21:36.441 "ddgst": ${ddgst:-false} 00:21:36.441 }, 00:21:36.441 "method": "bdev_nvme_attach_controller" 00:21:36.441 } 00:21:36.441 EOF 00:21:36.441 )") 00:21:36.441 13:08:07 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@582 -- # cat 00:21:36.441 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:36.441 13:08:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:36.441 13:08:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:36.441 13:08:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:36.441 { 00:21:36.441 "params": { 00:21:36.441 "name": "Nvme$subsystem", 00:21:36.441 "trtype": "$TEST_TRANSPORT", 00:21:36.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.441 "adrfam": "ipv4", 00:21:36.441 "trsvcid": "$NVMF_PORT", 00:21:36.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.441 "hdgst": ${hdgst:-false}, 00:21:36.441 "ddgst": ${ddgst:-false} 00:21:36.441 }, 00:21:36.441 "method": "bdev_nvme_attach_controller" 00:21:36.441 } 00:21:36.441 EOF 00:21:36.441 )") 00:21:36.441 13:08:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:36.441 13:08:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:21:36.441 13:08:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:21:36.441 13:08:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:36.441 "params": { 00:21:36.441 "name": "Nvme0", 00:21:36.441 "trtype": "tcp", 00:21:36.441 "traddr": "10.0.0.3", 00:21:36.441 "adrfam": "ipv4", 00:21:36.441 "trsvcid": "4420", 00:21:36.441 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:36.441 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:36.441 "hdgst": false, 00:21:36.441 "ddgst": false 00:21:36.441 }, 00:21:36.441 "method": "bdev_nvme_attach_controller" 00:21:36.441 },{ 00:21:36.441 "params": { 00:21:36.441 "name": "Nvme1", 00:21:36.441 "trtype": "tcp", 00:21:36.441 "traddr": "10.0.0.3", 00:21:36.441 "adrfam": "ipv4", 00:21:36.441 "trsvcid": "4420", 00:21:36.441 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.441 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:36.441 "hdgst": false, 00:21:36.441 "ddgst": false 00:21:36.441 }, 00:21:36.441 "method": "bdev_nvme_attach_controller" 00:21:36.441 },{ 00:21:36.441 "params": { 00:21:36.441 "name": "Nvme2", 00:21:36.441 "trtype": "tcp", 00:21:36.441 "traddr": "10.0.0.3", 00:21:36.441 "adrfam": "ipv4", 00:21:36.441 "trsvcid": "4420", 00:21:36.441 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:36.441 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:36.441 "hdgst": false, 00:21:36.441 "ddgst": false 00:21:36.441 }, 00:21:36.441 "method": "bdev_nvme_attach_controller" 00:21:36.441 }' 00:21:36.441 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:36.441 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:36.441 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:36.441 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:36.441 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:36.441 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:36.441 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:36.441 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:36.441 13:08:07 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:36.441 13:08:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:36.441 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:36.441 ... 00:21:36.441 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:36.441 ... 00:21:36.441 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:36.441 ... 00:21:36.441 fio-3.35 00:21:36.441 Starting 24 threads 00:21:48.685 fio: pid=83612, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:21:48.685 [2024-11-29 13:08:20.084383] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x1b2d0e0 via correct icresp 00:21:48.685 [2024-11-29 13:08:20.084445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b2d0e0 00:21:48.685 fio: io_u error on file Nvme2n1: Input/output error: read offset=56950784, buflen=4096 00:21:48.685 fio: io_u error on file Nvme2n1: Input/output error: read offset=28827648, buflen=4096 00:21:48.685 fio: io_u error on file Nvme2n1: Input/output error: read offset=49283072, buflen=4096 00:21:48.685 fio: io_u error on file Nvme2n1: Input/output error: read offset=65605632, buflen=4096 00:21:48.686 fio: io_u error on file Nvme2n1: Input/output error: read offset=58163200, buflen=4096 00:21:48.686 fio: io_u error on file Nvme2n1: Input/output error: read offset=34717696, buflen=4096 00:21:48.686 fio: io_u error on file Nvme2n1: Input/output error: read offset=34066432, buflen=4096 00:21:48.686 fio: io_u error on file Nvme2n1: Input/output error: read offset=49029120, buflen=4096 00:21:48.686 fio: io_u error on file Nvme2n1: Input/output error: read offset=50032640, buflen=4096 00:21:48.686 fio: io_u error on file Nvme2n1: Input/output error: read offset=9048064, buflen=4096 00:21:48.686 fio: io_u error on file Nvme2n1: Input/output error: read offset=25878528, buflen=4096 00:21:48.686 fio: io_u error on file Nvme2n1: Input/output error: read offset=53391360, buflen=4096 00:21:48.686 fio: io_u error on file Nvme2n1: Input/output error: read offset=44511232, buflen=4096 00:21:48.686 fio: io_u error on file Nvme2n1: Input/output error: read offset=14729216, buflen=4096 00:21:48.686 fio: io_u error on file Nvme2n1: Input/output error: read offset=33435648, buflen=4096 00:21:48.686 fio: io_u error on file Nvme2n1: Input/output error: read offset=42016768, buflen=4096 00:21:50.596 [2024-11-29 13:08:21.948543] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x1b2c780 via correct icresp 00:21:50.596 [2024-11-29 13:08:21.948601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b2c780 00:21:50.596 fio: io_u error on file Nvme1n1: Input/output error: read offset=39387136, buflen=4096 00:21:50.596 fio: io_u error on file Nvme1n1: Input/output error: read offset=47931392, buflen=4096 00:21:50.596 fio: io_u error on file Nvme1n1: Input/output error: read offset=21307392, buflen=4096 00:21:50.596 fio: io_u error on file Nvme1n1: Input/output error: read offset=58040320, buflen=4096 00:21:50.596 fio: pid=83611, err=5/file:io_u.c:1889, func=io_u error, 
error=Input/output error 00:21:50.596 fio: io_u error on file Nvme1n1: Input/output error: read offset=46010368, buflen=4096 00:21:50.596 fio: io_u error on file Nvme1n1: Input/output error: read offset=42749952, buflen=4096 00:21:50.596 fio: io_u error on file Nvme1n1: Input/output error: read offset=35491840, buflen=4096 00:21:50.596 fio: io_u error on file Nvme1n1: Input/output error: read offset=57552896, buflen=4096 00:21:50.596 fio: io_u error on file Nvme1n1: Input/output error: read offset=17387520, buflen=4096 00:21:50.596 fio: io_u error on file Nvme1n1: Input/output error: read offset=37646336, buflen=4096 00:21:50.596 fio: io_u error on file Nvme1n1: Input/output error: read offset=42897408, buflen=4096 00:21:50.596 fio: io_u error on file Nvme1n1: Input/output error: read offset=28008448, buflen=4096 00:21:50.596 fio: io_u error on file Nvme1n1: Input/output error: read offset=42741760, buflen=4096 00:21:50.596 fio: io_u error on file Nvme1n1: Input/output error: read offset=2555904, buflen=4096 00:21:50.596 fio: io_u error on file Nvme1n1: Input/output error: read offset=34897920, buflen=4096 00:21:50.596 fio: io_u error on file Nvme1n1: Input/output error: read offset=50384896, buflen=4096 00:21:50.596 [2024-11-29 13:08:21.967768] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x1b2d680 via correct icresp 00:21:50.596 [2024-11-29 13:08:21.967812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b2d680 00:21:50.596 fio: pid=83609, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:21:50.596 fio: io_u error on file Nvme1n1: Input/output error: read offset=21385216, buflen=4096 00:21:50.596 fio: io_u error on file Nvme1n1: Input/output error: read offset=24121344, buflen=4096 00:21:50.596 fio: io_u error on file Nvme1n1: Input/output error: read offset=65490944, buflen=4096 00:21:50.596 fio: io_u error on file Nvme1n1: Input/output error: read offset=54857728, buflen=4096 00:21:50.596 fio: io_u error on file Nvme1n1: Input/output error: read offset=65241088, buflen=4096 00:21:50.596 fio: io_u error on file Nvme1n1: Input/output error: read offset=26177536, buflen=4096 00:21:50.596 fio: io_u error on file Nvme1n1: Input/output error: read offset=14835712, buflen=4096 00:21:50.596 fio: io_u error on file Nvme1n1: Input/output error: read offset=1720320, buflen=4096 00:21:50.596 fio: io_u error on file Nvme1n1: Input/output error: read offset=40833024, buflen=4096 00:21:50.596 fio: io_u error on file Nvme1n1: Input/output error: read offset=2764800, buflen=4096 00:21:50.596 fio: io_u error on file Nvme1n1: Input/output error: read offset=59269120, buflen=4096 00:21:50.596 fio: io_u error on file Nvme1n1: Input/output error: read offset=65015808, buflen=4096 00:21:50.596 fio: io_u error on file Nvme1n1: Input/output error: read offset=8192, buflen=4096 00:21:50.596 fio: io_u error on file Nvme1n1: Input/output error: read offset=37806080, buflen=4096 00:21:50.596 fio: io_u error on file Nvme1n1: Input/output error: read offset=15462400, buflen=4096 00:21:50.596 fio: io_u error on file Nvme1n1: Input/output error: read offset=17604608, buflen=4096 00:21:50.596 fio: pid=83613, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:21:50.596 [2024-11-29 13:08:21.972538] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x1b2d860 via correct icresp 00:21:50.596 [2024-11-29 13:08:21.972581] 
nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b2d860 00:21:50.596 fio: io_u error on file Nvme2n1: Input/output error: read offset=19333120, buflen=4096 00:21:50.596 fio: io_u error on file Nvme2n1: Input/output error: read offset=56950784, buflen=4096 00:21:50.596 fio: io_u error on file Nvme2n1: Input/output error: read offset=35749888, buflen=4096 00:21:50.596 fio: io_u error on file Nvme2n1: Input/output error: read offset=24522752, buflen=4096 00:21:50.596 fio: io_u error on file Nvme2n1: Input/output error: read offset=45363200, buflen=4096 00:21:50.596 fio: io_u error on file Nvme2n1: Input/output error: read offset=46178304, buflen=4096 00:21:50.596 fio: io_u error on file Nvme2n1: Input/output error: read offset=12845056, buflen=4096 00:21:50.596 fio: io_u error on file Nvme2n1: Input/output error: read offset=48640000, buflen=4096 00:21:50.596 fio: io_u error on file Nvme2n1: Input/output error: read offset=48943104, buflen=4096 00:21:50.596 fio: io_u error on file Nvme2n1: Input/output error: read offset=19562496, buflen=4096 00:21:50.596 fio: io_u error on file Nvme2n1: Input/output error: read offset=10452992, buflen=4096 00:21:50.596 fio: io_u error on file Nvme2n1: Input/output error: read offset=7839744, buflen=4096 00:21:50.596 fio: io_u error on file Nvme2n1: Input/output error: read offset=52715520, buflen=4096 00:21:50.596 fio: io_u error on file Nvme2n1: Input/output error: read offset=51306496, buflen=4096 00:21:50.596 fio: io_u error on file Nvme2n1: Input/output error: read offset=22798336, buflen=4096 00:21:50.596 fio: io_u error on file Nvme2n1: Input/output error: read offset=41910272, buflen=4096 00:21:50.596 [2024-11-29 13:08:21.980588] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x1b2d2c0 via correct icresp 00:21:50.597 [2024-11-29 13:08:21.980627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b2d2c0 00:21:50.597 fio: io_u error on file Nvme2n1: Input/output error: read offset=44494848, buflen=4096 00:21:50.597 fio: io_u error on file Nvme2n1: Input/output error: read offset=7815168, buflen=4096 00:21:50.597 fio: io_u error on file Nvme2n1: Input/output error: read offset=19873792, buflen=4096 00:21:50.597 fio: io_u error on file Nvme2n1: Input/output error: read offset=6111232, buflen=4096 00:21:50.597 fio: io_u error on file Nvme2n1: Input/output error: read offset=16973824, buflen=4096 00:21:50.597 fio: io_u error on file Nvme2n1: Input/output error: read offset=49205248, buflen=4096 00:21:50.597 fio: io_u error on file Nvme2n1: Input/output error: read offset=31784960, buflen=4096 00:21:50.597 fio: io_u error on file Nvme2n1: Input/output error: read offset=7819264, buflen=4096 00:21:50.597 fio: pid=83618, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:21:50.597 fio: io_u error on file Nvme2n1: Input/output error: read offset=59662336, buflen=4096 00:21:50.597 fio: io_u error on file Nvme2n1: Input/output error: read offset=34951168, buflen=4096 00:21:50.597 fio: io_u error on file Nvme2n1: Input/output error: read offset=1232896, buflen=4096 00:21:50.597 fio: io_u error on file Nvme2n1: Input/output error: read offset=54398976, buflen=4096 00:21:50.597 fio: io_u error on file Nvme2n1: Input/output error: read offset=61435904, buflen=4096 00:21:50.597 fio: io_u error on file Nvme2n1: Input/output error: read offset=61767680, buflen=4096 00:21:50.597 fio: io_u error on file Nvme2n1: Input/output error: 
read offset=40538112, buflen=4096 00:21:50.597 fio: io_u error on file Nvme2n1: Input/output error: read offset=51154944, buflen=4096 00:21:50.597 [2024-11-29 13:08:22.018441] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x1b2da40 via correct icresp 00:21:50.597 [2024-11-29 13:08:22.018477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b2da40 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=23298048, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=37404672, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=53178368, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=782336, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=39247872, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=13889536, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=13529088, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=66187264, buflen=4096 00:21:50.597 fio: pid=83599, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=42729472, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=21630976, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=59961344, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=8798208, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=32940032, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=12214272, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=58417152, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=7098368, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=39010304, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=593920, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=8237056, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=35196928, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=40964096, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=29671424, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=8327168, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=36044800, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=28094464, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=41537536, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=56532992, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=23220224, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=46346240, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=51179520, buflen=4096 00:21:50.597 fio: io_u error on 
file Nvme0n1: Input/output error: read offset=60329984, buflen=4096 00:21:50.597 fio: pid=83597, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=38658048, buflen=4096 00:21:50.597 [2024-11-29 13:08:22.035469] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x3c205a0 via correct icresp 00:21:50.597 [2024-11-29 13:08:22.035508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x3c205a0 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=51261440, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=23396352, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=57282560, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=21434368, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=10530816, buflen=4096 00:21:50.597 fio: pid=83600, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=2691072, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=15687680, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=49655808, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=23162880, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=65122304, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=3477504, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=5808128, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=32575488, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=36069376, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=32911360, buflen=4096 00:21:50.597 fio: io_u error on file Nvme0n1: Input/output error: read offset=9801728, buflen=4096 00:21:50.597 [2024-11-29 13:08:22.052457] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x3c201e0 via correct icresp 00:21:50.597 [2024-11-29 13:08:22.052492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x3c201e0 00:21:50.597 fio: io_u error on file Nvme2n1: Input/output error: read offset=64974848, buflen=4096 00:21:50.597 fio: io_u error on file Nvme2n1: Input/output error: read offset=51191808, buflen=4096 00:21:50.597 fio: io_u error on file Nvme2n1: Input/output error: read offset=12398592, buflen=4096 00:21:50.597 fio: io_u error on file Nvme2n1: Input/output error: read offset=61280256, buflen=4096 00:21:50.597 fio: io_u error on file Nvme2n1: Input/output error: read offset=48734208, buflen=4096 00:21:50.597 fio: pid=83616, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:21:50.597 fio: io_u error on file Nvme2n1: Input/output error: read offset=11575296, buflen=4096 00:21:50.597 fio: io_u error on file Nvme2n1: Input/output error: read offset=52445184, buflen=4096 00:21:50.597 fio: io_u error on file Nvme2n1: Input/output error: read offset=60080128, buflen=4096 00:21:50.597 fio: io_u error on file Nvme2n1: Input/output error: read offset=44896256, buflen=4096 
00:21:50.597 fio: io_u error on file Nvme2n1: Input/output error: read offset=53538816, buflen=4096 00:21:50.597 fio: io_u error on file Nvme2n1: Input/output error: read offset=62349312, buflen=4096 00:21:50.597 fio: io_u error on file Nvme2n1: Input/output error: read offset=37318656, buflen=4096 00:21:50.597 fio: io_u error on file Nvme2n1: Input/output error: read offset=2191360, buflen=4096 00:21:50.597 fio: io_u error on file Nvme2n1: Input/output error: read offset=40357888, buflen=4096 00:21:50.597 fio: io_u error on file Nvme2n1: Input/output error: read offset=4063232, buflen=4096 00:21:50.597 fio: io_u error on file Nvme2n1: Input/output error: read offset=26181632, buflen=4096 00:21:50.597 [2024-11-29 13:08:22.054146] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x3c203c0 via correct icresp 00:21:50.597 [2024-11-29 13:08:22.054183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x3c203c0 00:21:50.597 fio: io_u error on file Nvme1n1: Input/output error: read offset=53186560, buflen=4096 00:21:50.597 fio: io_u error on file Nvme1n1: Input/output error: read offset=34996224, buflen=4096 00:21:50.597 fio: io_u error on file Nvme1n1: Input/output error: read offset=37572608, buflen=4096 00:21:50.597 fio: io_u error on file Nvme1n1: Input/output error: read offset=19476480, buflen=4096 00:21:50.597 fio: io_u error on file Nvme1n1: Input/output error: read offset=27541504, buflen=4096 00:21:50.597 fio: pid=83608, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:21:50.597 fio: io_u error on file Nvme1n1: Input/output error: read offset=44208128, buflen=4096 00:21:50.597 fio: io_u error on file Nvme1n1: Input/output error: read offset=36532224, buflen=4096 00:21:50.597 fio: io_u error on file Nvme1n1: Input/output error: read offset=38891520, buflen=4096 00:21:50.597 fio: io_u error on file Nvme1n1: Input/output error: read offset=913408, buflen=4096 00:21:50.597 fio: io_u error on file Nvme1n1: Input/output error: read offset=58646528, buflen=4096 00:21:50.597 fio: io_u error on file Nvme1n1: Input/output error: read offset=18313216, buflen=4096 00:21:50.597 fio: io_u error on file Nvme1n1: Input/output error: read offset=24666112, buflen=4096 00:21:50.597 fio: io_u error on file Nvme1n1: Input/output error: read offset=58052608, buflen=4096 00:21:50.597 fio: io_u error on file Nvme1n1: Input/output error: read offset=48078848, buflen=4096 00:21:50.597 fio: io_u error on file Nvme1n1: Input/output error: read offset=4517888, buflen=4096 00:21:50.597 fio: io_u error on file Nvme1n1: Input/output error: read offset=50819072, buflen=4096 00:21:50.597 [2024-11-29 13:08:22.056445] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x3c20780 via correct icresp 00:21:50.597 [2024-11-29 13:08:22.056479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x3c20780 00:21:50.597 fio: io_u error on file Nvme1n1: Input/output error: read offset=2277376, buflen=4096 00:21:50.597 fio: io_u error on file Nvme1n1: Input/output error: read offset=33816576, buflen=4096 00:21:50.597 fio: io_u error on file Nvme1n1: Input/output error: read offset=15503360, buflen=4096 00:21:50.597 fio: io_u error on file Nvme1n1: Input/output error: read offset=18022400, buflen=4096 00:21:50.597 fio: pid=83607, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:21:50.597 fio: io_u error on file Nvme1n1: Input/output error: read 
offset=35905536, buflen=4096 00:21:50.597 fio: io_u error on file Nvme1n1: Input/output error: read offset=64307200, buflen=4096 00:21:50.597 fio: io_u error on file Nvme1n1: Input/output error: read offset=35549184, buflen=4096 00:21:50.597 fio: io_u error on file Nvme1n1: Input/output error: read offset=39657472, buflen=4096 00:21:50.597 fio: io_u error on file Nvme1n1: Input/output error: read offset=52142080, buflen=4096 00:21:50.597 fio: io_u error on file Nvme1n1: Input/output error: read offset=34988032, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=55037952, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=55644160, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=54173696, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=22335488, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=62726144, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=58773504, buflen=4096 00:21:50.598 [2024-11-29 13:08:22.060532] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x3c20000 via correct icresp 00:21:50.598 [2024-11-29 13:08:22.060671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x3c20000 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=33353728, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=62767104, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=9048064, buflen=4096 00:21:50.598 fio: pid=83606, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=34938880, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=31834112, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=52871168, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=46837760, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=10993664, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=1417216, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=45432832, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=51412992, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=63549440, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=26009600, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=61190144, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=32485376, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=5083136, buflen=4096 00:21:50.598 [2024-11-29 13:08:22.063467] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x3c20960 via correct icresp 00:21:50.598 [2024-11-29 13:08:22.063500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x3c20960 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=28934144, buflen=4096 00:21:50.598 fio: io_u error on 
file Nvme1n1: Input/output error: read offset=47759360, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=5263360, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=64847872, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=737280, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=36368384, buflen=4096 00:21:50.598 fio: pid=83610, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=43036672, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=52785152, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=36683776, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=37040128, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=48943104, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=40386560, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=24539136, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=50425856, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=10510336, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=27250688, buflen=4096 00:21:50.598 [2024-11-29 13:08:22.064454] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x1b2dc20 via correct icresp 00:21:50.598 [2024-11-29 13:08:22.064482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b2dc20 00:21:50.598 fio: io_u error on file Nvme2n1: Input/output error: read offset=2039808, buflen=4096 00:21:50.598 fio: io_u error on file Nvme2n1: Input/output error: read offset=37330944, buflen=4096 00:21:50.598 fio: io_u error on file Nvme2n1: Input/output error: read offset=58757120, buflen=4096 00:21:50.598 fio: io_u error on file Nvme2n1: Input/output error: read offset=49201152, buflen=4096 00:21:50.598 fio: io_u error on file Nvme2n1: Input/output error: read offset=32235520, buflen=4096 00:21:50.598 fio: io_u error on file Nvme2n1: Input/output error: read offset=66932736, buflen=4096 00:21:50.598 fio: pid=83615, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:21:50.598 fio: io_u error on file Nvme2n1: Input/output error: read offset=6856704, buflen=4096 00:21:50.598 fio: io_u error on file Nvme2n1: Input/output error: read offset=49119232, buflen=4096 00:21:50.598 fio: io_u error on file Nvme2n1: Input/output error: read offset=14749696, buflen=4096 00:21:50.598 fio: io_u error on file Nvme2n1: Input/output error: read offset=32546816, buflen=4096 00:21:50.598 fio: io_u error on file Nvme2n1: Input/output error: read offset=36941824, buflen=4096 00:21:50.598 fio: io_u error on file Nvme2n1: Input/output error: read offset=29216768, buflen=4096 00:21:50.598 fio: io_u error on file Nvme2n1: Input/output error: read offset=45797376, buflen=4096 00:21:50.598 fio: io_u error on file Nvme2n1: Input/output error: read offset=45506560, buflen=4096 00:21:50.598 fio: io_u error on file Nvme2n1: Input/output error: read offset=6017024, buflen=4096 00:21:50.598 fio: io_u error on file Nvme2n1: Input/output error: read offset=22528000, buflen=4096 
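
The remainder of the capture is dominated by err=5 (EIO) completions reported by fio after the TCP connect attempts fail at the icresp stage (the "Failed to construct the tqpair ... via correct icresp" messages above). When sifting through a capture like this offline, a quick way to count how many I/O errors each fio job reported, assuming the console output was saved to a file (fio_dif.log is a placeholder name), is:

grep -oE 'pid=[0-9]+, err=[0-9]+' fio_dif.log | sort | uniq -c
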
00:21:50.598 [2024-11-29 13:08:22.065234] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x3c20b40 via correct icresp 00:21:50.598 [2024-11-29 13:08:22.065269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x3c20b40 00:21:50.598 fio: io_u error on file Nvme2n1: Input/output error: read offset=38592512, buflen=4096 00:21:50.598 fio: io_u error on file Nvme2n1: Input/output error: read offset=49111040, buflen=4096 00:21:50.598 fio: io_u error on file Nvme2n1: Input/output error: read offset=16941056, buflen=4096 00:21:50.598 fio: io_u error on file Nvme2n1: Input/output error: read offset=34516992, buflen=4096 00:21:50.598 fio: io_u error on file Nvme2n1: Input/output error: read offset=16191488, buflen=4096 00:21:50.598 fio: pid=83619, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:21:50.598 fio: io_u error on file Nvme2n1: Input/output error: read offset=10518528, buflen=4096 00:21:50.598 fio: io_u error on file Nvme2n1: Input/output error: read offset=50724864, buflen=4096 00:21:50.598 fio: io_u error on file Nvme2n1: Input/output error: read offset=65597440, buflen=4096 00:21:50.598 fio: io_u error on file Nvme2n1: Input/output error: read offset=49045504, buflen=4096 00:21:50.598 fio: io_u error on file Nvme2n1: Input/output error: read offset=24788992, buflen=4096 00:21:50.598 fio: io_u error on file Nvme2n1: Input/output error: read offset=33144832, buflen=4096 00:21:50.598 fio: io_u error on file Nvme2n1: Input/output error: read offset=9060352, buflen=4096 00:21:50.598 fio: io_u error on file Nvme2n1: Input/output error: read offset=38088704, buflen=4096 00:21:50.598 fio: io_u error on file Nvme2n1: Input/output error: read offset=60559360, buflen=4096 00:21:50.598 fio: io_u error on file Nvme2n1: Input/output error: read offset=40730624, buflen=4096 00:21:50.598 fio: io_u error on file Nvme2n1: Input/output error: read offset=9224192, buflen=4096 00:21:50.598 [2024-11-29 13:08:22.066074] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x3c21680 via correct icresp 00:21:50.598 [2024-11-29 13:08:22.066097] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x3c20f00 via correct icresp 00:21:50.598 [2024-11-29 13:08:22.066130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x3c21680 00:21:50.598 [2024-11-29 13:08:22.066148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x3c20f00 00:21:50.598 [2024-11-29 13:08:22.066260] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x3c212c0 via correct icresp 00:21:50.598 [2024-11-29 13:08:22.066266] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x3c210e0 via correct icresp 00:21:50.598 [2024-11-29 13:08:22.066317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x3c212c0 00:21:50.598 [2024-11-29 13:08:22.066334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x3c210e0 00:21:50.598 fio: io_u error on file Nvme0n1: Input/output error: read offset=53342208, buflen=4096 00:21:50.598 fio: io_u error on file Nvme0n1: Input/output error: read offset=39178240, buflen=4096 00:21:50.598 fio: io_u error on file Nvme0n1: Input/output error: read offset=23973888, buflen=4096 00:21:50.598 fio: io_u error on file Nvme0n1: Input/output 
error: read offset=38977536, buflen=4096 00:21:50.598 fio: io_u error on file Nvme0n1: Input/output error: read offset=12337152, buflen=4096 00:21:50.598 fio: io_u error on file Nvme0n1: Input/output error: read offset=4063232, buflen=4096 00:21:50.598 fio: io_u error on file Nvme0n1: Input/output error: read offset=63217664, buflen=4096 00:21:50.598 fio: io_u error on file Nvme0n1: Input/output error: read offset=3981312, buflen=4096 00:21:50.598 fio: pid=83603, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:21:50.598 fio: io_u error on file Nvme0n1: Input/output error: read offset=36765696, buflen=4096 00:21:50.598 fio: io_u error on file Nvme0n1: Input/output error: read offset=58126336, buflen=4096 00:21:50.598 fio: io_u error on file Nvme0n1: Input/output error: read offset=41697280, buflen=4096 00:21:50.598 fio: io_u error on file Nvme0n1: Input/output error: read offset=16314368, buflen=4096 00:21:50.598 fio: io_u error on file Nvme0n1: Input/output error: read offset=13213696, buflen=4096 00:21:50.598 fio: io_u error on file Nvme0n1: Input/output error: read offset=12914688, buflen=4096 00:21:50.598 fio: io_u error on file Nvme0n1: Input/output error: read offset=30134272, buflen=4096 00:21:50.598 fio: io_u error on file Nvme0n1: Input/output error: read offset=38567936, buflen=4096 00:21:50.598 [2024-11-29 13:08:22.066653] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x3c20d20 via correct icresp 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=47452160, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=34287616, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=831488, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=53870592, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=37478400, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=2457600, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=40656896, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=11014144, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=48861184, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=17141760, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=55959552, buflen=4096 00:21:50.598 fio: io_u error on file Nvme1n1: Input/output error: read offset=27402240, buflen=4096 00:21:50.599 fio: io_u error on file Nvme1n1: Input/output error: read offset=38703104, buflen=4096 00:21:50.599 fio: io_u error on file Nvme1n1: Input/output error: read offset=35835904, buflen=4096 00:21:50.599 fio: io_u error on file Nvme1n1: Input/output error: read offset=1691648, buflen=4096 00:21:50.599 fio: io_u error on file Nvme1n1: Input/output error: read offset=61239296, buflen=4096 00:21:50.599 fio: pid=83605, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:21:50.599 [2024-11-29 13:08:22.066771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x3c20d20 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=59994112, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=62394368, buflen=4096 00:21:50.599 fio: io_u error 
on file Nvme0n1: Input/output error: read offset=12816384, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=16596992, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=20307968, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=40996864, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=52510720, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=36761600, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=24530944, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=22044672, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=32223232, buflen=4096 00:21:50.599 fio: pid=83598, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=54571008, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=2121728, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=12918784, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=28672, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=18472960, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=36966400, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=55111680, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=38477824, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=23859200, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=58073088, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=59396096, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=56569856, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=61960192, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=41091072, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=42139648, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=4927488, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=66584576, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=53096448, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=13066240, buflen=4096 00:21:50.599 fio: pid=83601, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=66953216, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=55177216, buflen=4096 00:21:50.599 fio: io_u error on file Nvme1n1: Input/output error: read offset=45527040, buflen=4096 00:21:50.599 fio: io_u error on file Nvme1n1: Input/output error: read offset=31428608, buflen=4096 00:21:50.599 fio: io_u error on file Nvme1n1: Input/output error: read offset=38137856, buflen=4096 00:21:50.599 fio: io_u error on file Nvme1n1: Input/output error: read offset=48013312, 
buflen=4096 00:21:50.599 fio: io_u error on file Nvme1n1: Input/output error: read offset=19353600, buflen=4096 00:21:50.599 fio: io_u error on file Nvme1n1: Input/output error: read offset=60690432, buflen=4096 00:21:50.599 fio: io_u error on file Nvme1n1: Input/output error: read offset=5431296, buflen=4096 00:21:50.599 fio: io_u error on file Nvme1n1: Input/output error: read offset=40865792, buflen=4096 00:21:50.599 fio: pid=83604, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:21:50.599 fio: io_u error on file Nvme1n1: Input/output error: read offset=28303360, buflen=4096 00:21:50.599 fio: io_u error on file Nvme1n1: Input/output error: read offset=54108160, buflen=4096 00:21:50.599 fio: io_u error on file Nvme1n1: Input/output error: read offset=45944832, buflen=4096 00:21:50.599 fio: io_u error on file Nvme1n1: Input/output error: read offset=36769792, buflen=4096 00:21:50.599 fio: io_u error on file Nvme1n1: Input/output error: read offset=19599360, buflen=4096 00:21:50.599 fio: io_u error on file Nvme1n1: Input/output error: read offset=9601024, buflen=4096 00:21:50.599 fio: io_u error on file Nvme1n1: Input/output error: read offset=6950912, buflen=4096 00:21:50.599 fio: io_u error on file Nvme1n1: Input/output error: read offset=58298368, buflen=4096 00:21:50.599 [2024-11-29 13:08:22.067462] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x1b2de00 via correct icresp 00:21:50.599 [2024-11-29 13:08:22.067500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b2de00 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=26419200, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=56233984, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=7028736, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=57393152, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=66609152, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=49180672, buflen=4096 00:21:50.599 fio: pid=83602, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=20025344, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=44130304, buflen=4096 00:21:50.599 [2024-11-29 13:08:22.068145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b2c000 (9): Bad file descriptor 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=23453696, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=34766848, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=36364288, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=32235520, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=32780288, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=54685696, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=13651968, buflen=4096 00:21:50.599 fio: io_u error on file Nvme0n1: Input/output error: read offset=10285056, buflen=4096 00:21:50.599 [2024-11-29 13:08:22.068340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: 
*ERROR*: Failed to flush tqpair=0x1a93e00 (9): Bad file descriptor 00:21:50.599 [2024-11-29 13:08:22.068516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b2c5a0 (9): Bad file descriptor 00:21:50.599 fio: io_u error on file Nvme2n1: Input/output error: read offset=3481600, buflen=4096 00:21:50.599 fio: io_u error on file Nvme2n1: Input/output error: read offset=56184832, buflen=4096 00:21:50.599 fio: io_u error on file Nvme2n1: Input/output error: read offset=66342912, buflen=4096 00:21:50.599 fio: io_u error on file Nvme2n1: Input/output error: read offset=24215552, buflen=4096 00:21:50.599 fio: io_u error on file Nvme2n1: Input/output error: read offset=43143168, buflen=4096 00:21:50.599 fio: io_u error on file Nvme2n1: Input/output error: read offset=66883584, buflen=4096 00:21:50.599 fio: io_u error on file Nvme2n1: Input/output error: read offset=51359744, buflen=4096 00:21:50.599 fio: io_u error on file Nvme2n1: Input/output error: read offset=10039296, buflen=4096 00:21:50.599 fio: io_u error on file Nvme2n1: Input/output error: read offset=26849280, buflen=4096 00:21:50.599 fio: io_u error on file Nvme2n1: Input/output error: read offset=11550720, buflen=4096 00:21:50.599 fio: io_u error on file Nvme2n1: Input/output error: read offset=26439680, buflen=4096 00:21:50.599 fio: io_u error on file Nvme2n1: Input/output error: read offset=22265856, buflen=4096 00:21:50.599 fio: io_u error on file Nvme2n1: Input/output error: read offset=16035840, buflen=4096 00:21:50.599 fio: io_u error on file Nvme2n1: Input/output error: read offset=34263040, buflen=4096 00:21:50.599 fio: io_u error on file Nvme2n1: Input/output error: read offset=64954368, buflen=4096 00:21:50.599 fio: io_u error on file Nvme2n1: Input/output error: read offset=65003520, buflen=4096 00:21:50.599 fio: pid=83614, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:21:50.599 00:21:50.599 filename0: (groupid=0, jobs=1): err= 0: pid=83596: Fri Nov 29 13:08:22 2024 00:21:50.599 read: IOPS=1987, BW=7950KiB/s (8140kB/s)(77.7MiB/10012msec) 00:21:50.599 slat (usec): min=6, max=8032, avg=19.88, stdev=236.53 00:21:50.599 clat (usec): min=375, max=23697, avg=7893.38, stdev=3814.57 00:21:50.599 lat (usec): min=382, max=23704, avg=7913.26, stdev=3821.98 00:21:50.599 clat percentiles (usec): 00:21:50.599 | 1.00th=[ 1631], 5.00th=[ 1762], 10.00th=[ 1926], 20.00th=[ 3589], 00:21:50.599 | 30.00th=[ 5932], 40.00th=[ 7308], 50.00th=[ 7898], 60.00th=[ 8586], 00:21:50.599 | 70.00th=[10028], 80.00th=[11731], 90.00th=[12256], 95.00th=[14091], 00:21:50.599 | 99.00th=[15926], 99.50th=[18220], 99.90th=[20317], 99.95th=[21103], 00:21:50.599 | 99.99th=[23725] 00:21:50.599 bw ( KiB/s): min= 5488, max=13040, per=51.17%, avg=7952.80, stdev=1626.02, samples=20 00:21:50.599 iops : min= 1372, max= 3260, avg=1988.20, stdev=406.51, samples=20 00:21:50.599 lat (usec) : 500=0.02%, 750=0.08%, 1000=0.23% 00:21:50.599 lat (msec) : 2=10.31%, 4=10.51%, 10=49.07%, 20=29.43%, 50=0.35% 00:21:50.599 cpu : usr=40.25%, sys=4.04%, ctx=1205, majf=0, minf=0 00:21:50.599 IO depths : 1=3.3%, 2=9.5%, 4=24.9%, 8=53.2%, 16=9.1%, 32=0.0%, >=64=0.0% 00:21:50.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.599 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.599 issued rwts: total=19898,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.599 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:50.599 filename0: (groupid=0, jobs=1): err= 5 
(file:io_u.c:1889, func=io_u error, error=Input/output error): pid=83597: Fri Nov 29 13:08:22 2024 00:21:50.599 read: IOPS=2285, BW=7837KiB/s (8025kB/s)(384KiB/49msec) 00:21:50.599 slat (nsec): min=4420, max=19190, avg=8235.89, stdev=2380.00 00:21:50.599 clat (usec): min=1671, max=12861, avg=7553.96, stdev=4476.87 00:21:50.599 lat (usec): min=1680, max=12872, avg=7562.11, stdev=4477.48 00:21:50.599 clat percentiles (usec): 00:21:50.599 | 1.00th=[ 1680], 5.00th=[ 1713], 10.00th=[ 1745], 20.00th=[ 2114], 00:21:50.599 | 30.00th=[ 2147], 40.00th=[ 6849], 50.00th=[ 6915], 60.00th=[ 9241], 00:21:50.599 | 70.00th=[12518], 80.00th=[12649], 90.00th=[12780], 95.00th=[12780], 00:21:50.599 | 99.00th=[12911], 99.50th=[12911], 99.90th=[12911], 99.95th=[12911], 00:21:50.599 | 99.99th=[12911] 00:21:50.599 lat (msec) : 2=14.29%, 4=14.29%, 10=28.57%, 20=28.57% 00:21:50.599 cpu : usr=47.92%, sys=0.00%, ctx=25, majf=0, minf=0 00:21:50.599 IO depths : 1=3.6%, 2=9.8%, 4=25.0%, 8=52.7%, 16=8.9%, 32=0.0%, >=64=0.0% 00:21:50.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 complete : 0=0.8%, 4=93.5%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 issued rwts: total=112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:50.600 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=83598: Fri Nov 29 13:08:22 2024 00:21:50.600 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:21:50.600 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:21:50.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:50.600 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=83599: Fri Nov 29 13:08:22 2024 00:21:50.600 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:21:50.600 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:21:50.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:50.600 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=83600: Fri Nov 29 13:08:22 2024 00:21:50.600 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:21:50.600 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:21:50.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:50.600 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=83601: Fri Nov 29 13:08:22 2024 00:21:50.600 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:21:50.600 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:21:50.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:21:50.600 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:50.600 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=83602: Fri Nov 29 13:08:22 2024 00:21:50.600 cpu : usr=0.00%, sys=0.00%, ctx=6, majf=0, minf=0 00:21:50.600 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:21:50.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:50.600 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=83603: Fri Nov 29 13:08:22 2024 00:21:50.600 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:21:50.600 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:21:50.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:50.600 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=83604: Fri Nov 29 13:08:22 2024 00:21:50.600 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:21:50.600 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:21:50.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:50.600 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=83605: Fri Nov 29 13:08:22 2024 00:21:50.600 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:21:50.600 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:21:50.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:50.600 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=83606: Fri Nov 29 13:08:22 2024 00:21:50.600 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:21:50.600 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:21:50.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:50.600 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=83607: Fri Nov 29 13:08:22 2024 00:21:50.600 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:21:50.600 IO 
depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:21:50.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:50.600 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=83608: Fri Nov 29 13:08:22 2024 00:21:50.600 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:21:50.600 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:21:50.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:50.600 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=83609: Fri Nov 29 13:08:22 2024 00:21:50.600 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:21:50.600 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:21:50.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:50.600 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=83610: Fri Nov 29 13:08:22 2024 00:21:50.600 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:21:50.600 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:21:50.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:50.600 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=83611: Fri Nov 29 13:08:22 2024 00:21:50.600 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:21:50.600 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:21:50.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:50.600 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=83612: Fri Nov 29 13:08:22 2024 00:21:50.600 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:21:50.600 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:21:50.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:50.600 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u 
error, error=Input/output error): pid=83613: Fri Nov 29 13:08:22 2024 00:21:50.600 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:21:50.600 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:21:50.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:50.600 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=83614: Fri Nov 29 13:08:22 2024 00:21:50.600 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:21:50.600 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:21:50.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:50.600 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=83615: Fri Nov 29 13:08:22 2024 00:21:50.600 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:21:50.600 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:21:50.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.600 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:50.600 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=83616: Fri Nov 29 13:08:22 2024 00:21:50.600 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:21:50.600 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:21:50.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.601 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.601 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.601 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:50.601 filename2: (groupid=0, jobs=1): err= 0: pid=83617: Fri Nov 29 13:08:22 2024 00:21:50.601 read: IOPS=1889, BW=7557KiB/s (7738kB/s)(73.8MiB/10006msec) 00:21:50.601 slat (usec): min=5, max=8083, avg=16.81, stdev=194.86 00:21:50.601 clat (usec): min=408, max=28335, avg=8324.30, stdev=3755.39 00:21:50.601 lat (usec): min=416, max=28347, avg=8341.11, stdev=3761.41 00:21:50.601 clat percentiles (usec): 00:21:50.601 | 1.00th=[ 1647], 5.00th=[ 1827], 10.00th=[ 2638], 20.00th=[ 5080], 00:21:50.601 | 30.00th=[ 6915], 40.00th=[ 7701], 50.00th=[ 8029], 60.00th=[ 8979], 00:21:50.601 | 70.00th=[10290], 80.00th=[11863], 90.00th=[12780], 95.00th=[14353], 00:21:50.601 | 99.00th=[17433], 99.50th=[19792], 99.90th=[23987], 99.95th=[26084], 00:21:50.601 | 99.99th=[26084] 00:21:50.601 bw ( KiB/s): min= 5360, max= 9328, per=48.88%, avg=7596.95, stdev=1155.18, samples=19 00:21:50.601 iops : min= 1340, max= 2332, avg=1899.21, stdev=288.83, samples=19 00:21:50.601 lat (usec) : 500=0.01%, 750=0.12%, 1000=0.15% 00:21:50.601 lat (msec) : 2=6.41%, 4=9.17%, 10=51.72%, 20=31.93%, 50=0.50% 00:21:50.601 cpu : usr=40.01%, sys=4.17%, ctx=1381, majf=0, minf=9 00:21:50.601 
IO depths : 1=3.6%, 2=9.7%, 4=24.8%, 8=53.0%, 16=8.9%, 32=0.0%, >=64=0.0% 00:21:50.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.601 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.601 issued rwts: total=18904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.601 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:50.601 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=83618: Fri Nov 29 13:08:22 2024 00:21:50.601 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:21:50.601 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:21:50.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.601 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.601 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.601 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:50.601 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=83619: Fri Nov 29 13:08:22 2024 00:21:50.601 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:21:50.601 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:21:50.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.601 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.601 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.601 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:50.601 00:21:50.601 Run status group 0 (all jobs): 00:21:50.601 READ: bw=15.2MiB/s (15.9MB/s), 7557KiB/s-7950KiB/s (7738kB/s-8140kB/s), io=152MiB (159MB), run=49-10012msec 00:21:51.169 13:08:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # trap - ERR 00:21:51.169 13:08:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # print_backtrace 00:21:51.169 13:08:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1157 -- # [[ ehxBET =~ e ]] 00:21:51.169 13:08:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1159 -- # args=('/dev/fd/61' '/dev/fd/62' '--spdk_json_conf' '--ioengine=spdk_bdev' '/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' '/dev/fd/61' '/dev/fd/62' '--spdk_json_conf' '--ioengine=spdk_bdev' '/dev/fd/62' 'fio_dif_rand_params' 'fio_dif_rand_params' '--iso' '--transport=tcp') 00:21:51.169 13:08:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1159 -- # local args 00:21:51.169 13:08:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1161 -- # xtrace_disable 00:21:51.169 13:08:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:51.169 ========== Backtrace start: ========== 00:21:51.169 00:21:51.170 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1356 -> fio_plugin(["/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev"],["--ioengine=spdk_bdev"],["--spdk_json_conf"],["/dev/fd/62"],["/dev/fd/61"]) 00:21:51.170 ... 00:21:51.170 1351 break 00:21:51.170 1352 fi 00:21:51.170 1353 done 00:21:51.170 1354 00:21:51.170 1355 # Preload the sanitizer library to fio if fio_plugin was compiled with it 00:21:51.170 1356 LD_PRELOAD="$asan_lib $plugin" "$fio_dir"/fio "$@" 00:21:51.170 1357 } 00:21:51.170 1358 00:21:51.170 1359 function fio_bdev() { 00:21:51.170 1360 fio_plugin "$rootdir/build/fio/spdk_bdev" "$@" 00:21:51.170 1361 } 00:21:51.170 ... 
00:21:51.170 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1360 -> fio_bdev(["--ioengine=spdk_bdev"],["--spdk_json_conf"],["/dev/fd/62"],["/dev/fd/61"]) 00:21:51.170 ... 00:21:51.170 1355 # Preload the sanitizer library to fio if fio_plugin was compiled with it 00:21:51.170 1356 LD_PRELOAD="$asan_lib $plugin" "$fio_dir"/fio "$@" 00:21:51.170 1357 } 00:21:51.170 1358 00:21:51.170 1359 function fio_bdev() { 00:21:51.170 1360 fio_plugin "$rootdir/build/fio/spdk_bdev" "$@" 00:21:51.170 1361 } 00:21:51.170 1362 00:21:51.170 1363 function fio_nvme() { 00:21:51.170 1364 fio_plugin "$rootdir/build/fio/spdk_nvme" "$@" 00:21:51.170 1365 } 00:21:51.170 ... 00:21:51.170 in /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh:82 -> fio(["/dev/fd/62"]) 00:21:51.170 ... 00:21:51.170 77 FIO 00:21:51.170 78 done 00:21:51.170 79 } 00:21:51.170 80 00:21:51.170 81 fio() { 00:21:51.170 => 82 fio_bdev --ioengine=spdk_bdev --spdk_json_conf "$@" <(gen_fio_conf) 00:21:51.170 83 } 00:21:51.170 84 00:21:51.170 85 fio_dif_1() { 00:21:51.170 86 create_subsystems 0 00:21:51.170 87 fio <(create_json_sub_conf 0) 00:21:51.170 ... 00:21:51.170 in /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh:112 -> fio_dif_rand_params([]) 00:21:51.170 ... 00:21:51.170 107 destroy_subsystems 0 00:21:51.170 108 00:21:51.170 109 NULL_DIF=2 bs=4k numjobs=8 iodepth=16 runtime="" files=2 00:21:51.170 110 00:21:51.170 111 create_subsystems 0 1 2 00:21:51.170 => 112 fio <(create_json_sub_conf 0 1 2) 00:21:51.170 113 destroy_subsystems 0 1 2 00:21:51.170 114 00:21:51.170 115 NULL_DIF=1 bs=8k,16k,128k numjobs=2 iodepth=8 runtime=5 files=1 00:21:51.170 116 00:21:51.170 117 create_subsystems 0 1 00:21:51.170 ... 00:21:51.170 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1129 -> run_test(["fio_dif_rand_params"],["fio_dif_rand_params"]) 00:21:51.170 ... 00:21:51.170 1124 timing_enter $test_name 00:21:51.170 1125 echo "************************************" 00:21:51.170 1126 echo "START TEST $test_name" 00:21:51.170 1127 echo "************************************" 00:21:51.170 1128 xtrace_restore 00:21:51.170 1129 time "$@" 00:21:51.170 1130 xtrace_disable 00:21:51.170 1131 echo "************************************" 00:21:51.170 1132 echo "END TEST $test_name" 00:21:51.170 1133 echo "************************************" 00:21:51.170 1134 timing_exit $test_name 00:21:51.170 ... 00:21:51.170 in /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh:143 -> main(["--transport=tcp"],["--iso"]) 00:21:51.170 ... 00:21:51.170 138 00:21:51.170 139 create_transport 00:21:51.170 140 00:21:51.170 141 run_test "fio_dif_1_default" fio_dif_1 00:21:51.170 142 run_test "fio_dif_1_multi_subsystems" fio_dif_1_multi_subsystems 00:21:51.170 => 143 run_test "fio_dif_rand_params" fio_dif_rand_params 00:21:51.170 144 run_test "fio_dif_digest" fio_dif_digest 00:21:51.170 145 00:21:51.170 146 trap - SIGINT SIGTERM EXIT 00:21:51.170 147 nvmftestfini 00:21:51.170 ... 
00:21:51.170 00:21:51.170 ========== Backtrace end ========== 00:21:51.170 13:08:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1198 -- # return 0 00:21:51.170 00:21:51.170 real 0m21.416s 00:21:51.170 user 2m25.511s 00:21:51.170 sys 0m2.547s 00:21:51.170 13:08:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1 -- # process_shm --id 0 00:21:51.170 13:08:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@812 -- # type=--id 00:21:51.170 13:08:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@813 -- # id=0 00:21:51.170 13:08:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:51.170 13:08:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:51.170 13:08:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:51.170 13:08:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:51.170 13:08:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:51.170 13:08:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:51.170 nvmf_trace.0 00:21:51.170 13:08:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@827 -- # return 0 00:21:51.170 13:08:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1 -- # nvmftestfini 00:21:51.170 13:08:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:51.170 13:08:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@121 -- # sync 00:21:51.170 13:08:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:51.170 13:08:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@124 -- # set +e 00:21:51.170 13:08:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:51.170 13:08:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:51.170 rmmod nvme_tcp 00:21:51.170 rmmod nvme_fabrics 00:21:51.170 rmmod nvme_keyring 00:21:51.170 13:08:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:51.170 13:08:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@128 -- # set -e 00:21:51.170 13:08:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@129 -- # return 0 00:21:51.170 13:08:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@517 -- # '[' -n 83122 ']' 00:21:51.170 13:08:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@518 -- # killprocess 83122 00:21:51.170 13:08:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@954 -- # '[' -z 83122 ']' 00:21:51.170 13:08:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@958 -- # kill -0 83122 00:21:51.170 13:08:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@959 -- # uname 00:21:51.170 13:08:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:51.170 13:08:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83122 00:21:51.170 13:08:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:51.170 killing process with pid 83122 00:21:51.170 13:08:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:51.170 13:08:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 83122' 00:21:51.170 13:08:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@973 -- # kill 83122 00:21:51.170 13:08:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@978 -- # wait 83122 00:21:51.428 13:08:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:21:51.428 13:08:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:51.687 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:51.687 Waiting for block devices as requested 00:21:51.945 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:51.945 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:51.945 13:08:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:51.945 13:08:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:51.945 13:08:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@297 -- # iptr 00:21:51.945 13:08:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@791 -- # iptables-save 00:21:51.945 13:08:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@791 -- # iptables-restore 00:21:51.945 13:08:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:51.945 13:08:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:51.945 13:08:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:51.945 13:08:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:51.945 13:08:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:51.945 13:08:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:51.945 13:08:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:52.203 13:08:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:52.203 13:08:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:52.203 13:08:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:52.203 13:08:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:52.203 13:08:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:52.203 13:08:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:52.203 13:08:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:52.203 13:08:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:52.203 13:08:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:52.203 13:08:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:52.203 13:08:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.203 13:08:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:52.203 13:08:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.203 13:08:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@300 -- # return 0 
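Note: the nvmftestfini teardown above unloads the host-side NVMe fabrics modules, kills the SPDK nvmf target (pid 83122 in this run) and removes the veth/bridge/netns test topology. A minimal manual equivalent is sketched below; the module, interface, bridge and namespace names are the ones printed in the log, everything else is illustrative and will differ per run:

    # unload the kernel modules pulled in by the host-side TCP connect
    sudo modprobe -r nvme-tcp nvme-fabrics nvme-keyring
    # stop the SPDK nvmf target process (pid taken from this log; yours will differ)
    sudo kill 83122
    # remove the bridge, the host-side veth endpoints and the target network namespace
    sudo ip link delete nvmf_br type bridge
    sudo ip link delete nvmf_init_if
    sudo ip link delete nvmf_init_if2
    sudo ip netns delete nvmf_tgt_ns_spdk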
00:21:52.203 13:08:23 nvmf_dif -- common/autotest_common.sh@1129 -- # trap - ERR 00:21:52.203 13:08:23 nvmf_dif -- common/autotest_common.sh@1129 -- # print_backtrace 00:21:52.203 13:08:23 nvmf_dif -- common/autotest_common.sh@1157 -- # [[ ehxBET =~ e ]] 00:21:52.203 13:08:23 nvmf_dif -- common/autotest_common.sh@1159 -- # args=('/home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh' 'nvmf_dif' '/home/vagrant/spdk_repo/autorun-spdk.conf') 00:21:52.203 13:08:23 nvmf_dif -- common/autotest_common.sh@1159 -- # local args 00:21:52.203 13:08:23 nvmf_dif -- common/autotest_common.sh@1161 -- # xtrace_disable 00:21:52.203 13:08:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:52.203 ========== Backtrace start: ========== 00:21:52.203 00:21:52.203 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1129 -> run_test(["nvmf_dif"],["/home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh"]) 00:21:52.203 ... 00:21:52.203 1124 timing_enter $test_name 00:21:52.203 1125 echo "************************************" 00:21:52.203 1126 echo "START TEST $test_name" 00:21:52.203 1127 echo "************************************" 00:21:52.203 1128 xtrace_restore 00:21:52.203 1129 time "$@" 00:21:52.203 1130 xtrace_disable 00:21:52.203 1131 echo "************************************" 00:21:52.203 1132 echo "END TEST $test_name" 00:21:52.203 1133 echo "************************************" 00:21:52.203 1134 timing_exit $test_name 00:21:52.203 ... 00:21:52.203 in /home/vagrant/spdk_repo/spdk/autotest.sh:289 -> main(["/home/vagrant/spdk_repo/autorun-spdk.conf"]) 00:21:52.203 ... 00:21:52.203 284 run_test "nvmf_tcp" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:21:52.203 285 if [[ $SPDK_TEST_URING -eq 0 ]]; then 00:21:52.203 286 run_test "spdkcli_nvmf_tcp" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:21:52.203 287 run_test "nvmf_identify_passthru" $rootdir/test/nvmf/target/identify_passthru.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:21:52.203 288 fi 00:21:52.203 => 289 run_test "nvmf_dif" $rootdir/test/nvmf/target/dif.sh 00:21:52.203 290 run_test "nvmf_abort_qd_sizes" $rootdir/test/nvmf/target/abort_qd_sizes.sh 00:21:52.203 291 # The keyring tests utilize NVMe/TLS 00:21:52.203 292 run_test "keyring_file" "$rootdir/test/keyring/file.sh" 00:21:52.203 293 if [[ "$CONFIG_HAVE_KEYUTILS" == y ]]; then 00:21:52.203 294 run_test "keyring_linux" "$rootdir/scripts/keyctl-session-wrapper" \ 00:21:52.203 ... 
00:21:52.203 00:21:52.203 ========== Backtrace end ========== 00:21:52.203 13:08:23 nvmf_dif -- common/autotest_common.sh@1198 -- # return 0 00:21:52.203 00:21:52.203 real 0m46.449s 00:21:52.203 user 3m26.020s 00:21:52.203 sys 0m11.199s 00:21:52.203 13:08:23 nvmf_dif -- common/autotest_common.sh@1 -- # autotest_cleanup 00:21:52.203 13:08:23 nvmf_dif -- common/autotest_common.sh@1396 -- # local autotest_es=22 00:21:52.203 13:08:23 nvmf_dif -- common/autotest_common.sh@1397 -- # xtrace_disable 00:21:52.203 13:08:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:04.410 INFO: APP EXITING 00:22:04.410 INFO: killing all VMs 00:22:04.410 INFO: killing vhost app 00:22:04.410 INFO: EXIT DONE 00:22:04.670 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:04.929 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:22:04.929 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:22:05.496 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:05.496 Cleaning 00:22:05.496 Removing: /var/run/dpdk/spdk0/config 00:22:05.496 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:22:05.496 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:22:05.496 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:22:05.496 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:22:05.496 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:22:05.496 Removing: /var/run/dpdk/spdk0/hugepage_info 00:22:05.496 Removing: /var/run/dpdk/spdk1/config 00:22:05.496 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:22:05.496 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:22:05.496 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:22:05.496 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:22:05.496 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:22:05.496 Removing: /var/run/dpdk/spdk1/hugepage_info 00:22:05.496 Removing: /var/run/dpdk/spdk2/config 00:22:05.496 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:22:05.496 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:22:05.496 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:22:05.496 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:22:05.496 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:22:05.496 Removing: /var/run/dpdk/spdk2/hugepage_info 00:22:05.496 Removing: /var/run/dpdk/spdk3/config 00:22:05.756 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:22:05.756 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:22:05.756 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:22:05.756 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:22:05.756 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:22:05.756 Removing: /var/run/dpdk/spdk3/hugepage_info 00:22:05.756 Removing: /var/run/dpdk/spdk4/config 00:22:05.756 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:22:05.756 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:22:05.756 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:22:05.756 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:22:05.756 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:22:05.756 Removing: /var/run/dpdk/spdk4/hugepage_info 00:22:05.756 Removing: /dev/shm/nvmf_trace.0 00:22:05.756 Removing: /dev/shm/spdk_tgt_trace.pid56714 00:22:05.756 Removing: /var/run/dpdk/spdk0 00:22:05.756 Removing: /var/run/dpdk/spdk1 00:22:05.756 Removing: /var/run/dpdk/spdk2 00:22:05.756 Removing: 
/var/run/dpdk/spdk3 00:22:05.756 Removing: /var/run/dpdk/spdk4 00:22:05.756 Removing: /var/run/dpdk/spdk_pid56550 00:22:05.756 Removing: /var/run/dpdk/spdk_pid56714 00:22:05.756 Removing: /var/run/dpdk/spdk_pid56912 00:22:05.756 Removing: /var/run/dpdk/spdk_pid56999 00:22:05.756 Removing: /var/run/dpdk/spdk_pid57032 00:22:05.756 Removing: /var/run/dpdk/spdk_pid57141 00:22:05.756 Removing: /var/run/dpdk/spdk_pid57152 00:22:05.756 Removing: /var/run/dpdk/spdk_pid57291 00:22:05.756 Removing: /var/run/dpdk/spdk_pid57487 00:22:05.756 Removing: /var/run/dpdk/spdk_pid57641 00:22:05.756 Removing: /var/run/dpdk/spdk_pid57719 00:22:05.756 Removing: /var/run/dpdk/spdk_pid57790 00:22:05.756 Removing: /var/run/dpdk/spdk_pid57887 00:22:05.756 Removing: /var/run/dpdk/spdk_pid57966 00:22:05.756 Removing: /var/run/dpdk/spdk_pid58005 00:22:05.756 Removing: /var/run/dpdk/spdk_pid58035 00:22:05.756 Removing: /var/run/dpdk/spdk_pid58105 00:22:05.756 Removing: /var/run/dpdk/spdk_pid58204 00:22:05.756 Removing: /var/run/dpdk/spdk_pid58643 00:22:05.756 Removing: /var/run/dpdk/spdk_pid58689 00:22:05.756 Removing: /var/run/dpdk/spdk_pid58740 00:22:05.756 Removing: /var/run/dpdk/spdk_pid58756 00:22:05.756 Removing: /var/run/dpdk/spdk_pid58829 00:22:05.756 Removing: /var/run/dpdk/spdk_pid58837 00:22:05.756 Removing: /var/run/dpdk/spdk_pid58910 00:22:05.756 Removing: /var/run/dpdk/spdk_pid58918 00:22:05.756 Removing: /var/run/dpdk/spdk_pid58969 00:22:05.756 Removing: /var/run/dpdk/spdk_pid58987 00:22:05.756 Removing: /var/run/dpdk/spdk_pid59033 00:22:05.756 Removing: /var/run/dpdk/spdk_pid59043 00:22:05.756 Removing: /var/run/dpdk/spdk_pid59179 00:22:05.756 Removing: /var/run/dpdk/spdk_pid59215 00:22:05.756 Removing: /var/run/dpdk/spdk_pid59292 00:22:05.756 Removing: /var/run/dpdk/spdk_pid59637 00:22:05.756 Removing: /var/run/dpdk/spdk_pid59649 00:22:05.756 Removing: /var/run/dpdk/spdk_pid59685 00:22:05.756 Removing: /var/run/dpdk/spdk_pid59699 00:22:05.756 Removing: /var/run/dpdk/spdk_pid59720 00:22:05.756 Removing: /var/run/dpdk/spdk_pid59739 00:22:05.757 Removing: /var/run/dpdk/spdk_pid59751 00:22:05.757 Removing: /var/run/dpdk/spdk_pid59768 00:22:05.757 Removing: /var/run/dpdk/spdk_pid59787 00:22:05.757 Removing: /var/run/dpdk/spdk_pid59806 00:22:05.757 Removing: /var/run/dpdk/spdk_pid59816 00:22:05.757 Removing: /var/run/dpdk/spdk_pid59835 00:22:05.757 Removing: /var/run/dpdk/spdk_pid59854 00:22:05.757 Removing: /var/run/dpdk/spdk_pid59875 00:22:05.757 Removing: /var/run/dpdk/spdk_pid59894 00:22:05.757 Removing: /var/run/dpdk/spdk_pid59904 00:22:05.757 Removing: /var/run/dpdk/spdk_pid59925 00:22:05.757 Removing: /var/run/dpdk/spdk_pid59944 00:22:05.757 Removing: /var/run/dpdk/spdk_pid59963 00:22:05.757 Removing: /var/run/dpdk/spdk_pid59973 00:22:05.757 Removing: /var/run/dpdk/spdk_pid60009 00:22:05.757 Removing: /var/run/dpdk/spdk_pid60028 00:22:05.757 Removing: /var/run/dpdk/spdk_pid60052 00:22:05.757 Removing: /var/run/dpdk/spdk_pid60124 00:22:05.757 Removing: /var/run/dpdk/spdk_pid60159 00:22:05.757 Removing: /var/run/dpdk/spdk_pid60163 00:22:05.757 Removing: /var/run/dpdk/spdk_pid60197 00:22:05.757 Removing: /var/run/dpdk/spdk_pid60212 00:22:05.757 Removing: /var/run/dpdk/spdk_pid60214 00:22:05.757 Removing: /var/run/dpdk/spdk_pid60262 00:22:05.757 Removing: /var/run/dpdk/spdk_pid60282 00:22:06.017 Removing: /var/run/dpdk/spdk_pid60311 00:22:06.017 Removing: /var/run/dpdk/spdk_pid60321 00:22:06.017 Removing: /var/run/dpdk/spdk_pid60331 00:22:06.017 Removing: /var/run/dpdk/spdk_pid60340 00:22:06.017 Removing: 
/var/run/dpdk/spdk_pid60355 00:22:06.017 Removing: /var/run/dpdk/spdk_pid60359 00:22:06.017 Removing: /var/run/dpdk/spdk_pid60374 00:22:06.017 Removing: /var/run/dpdk/spdk_pid60384 00:22:06.017 Removing: /var/run/dpdk/spdk_pid60412 00:22:06.017 Removing: /var/run/dpdk/spdk_pid60439 00:22:06.017 Removing: /var/run/dpdk/spdk_pid60448 00:22:06.017 Removing: /var/run/dpdk/spdk_pid60482 00:22:06.017 Removing: /var/run/dpdk/spdk_pid60492 00:22:06.017 Removing: /var/run/dpdk/spdk_pid60499 00:22:06.017 Removing: /var/run/dpdk/spdk_pid60540 00:22:06.017 Removing: /var/run/dpdk/spdk_pid60551 00:22:06.017 Removing: /var/run/dpdk/spdk_pid60578 00:22:06.017 Removing: /var/run/dpdk/spdk_pid60591 00:22:06.017 Removing: /var/run/dpdk/spdk_pid60597 00:22:06.017 Removing: /var/run/dpdk/spdk_pid60606 00:22:06.017 Removing: /var/run/dpdk/spdk_pid60613 00:22:06.017 Removing: /var/run/dpdk/spdk_pid60623 00:22:06.017 Removing: /var/run/dpdk/spdk_pid60630 00:22:06.017 Removing: /var/run/dpdk/spdk_pid60638 00:22:06.017 Removing: /var/run/dpdk/spdk_pid60720 00:22:06.017 Removing: /var/run/dpdk/spdk_pid60773 00:22:06.017 Removing: /var/run/dpdk/spdk_pid60884 00:22:06.017 Removing: /var/run/dpdk/spdk_pid60919 00:22:06.017 Removing: /var/run/dpdk/spdk_pid60965 00:22:06.017 Removing: /var/run/dpdk/spdk_pid60985 00:22:06.017 Removing: /var/run/dpdk/spdk_pid60996 00:22:06.017 Removing: /var/run/dpdk/spdk_pid61016 00:22:06.017 Removing: /var/run/dpdk/spdk_pid61054 00:22:06.017 Removing: /var/run/dpdk/spdk_pid61070 00:22:06.017 Removing: /var/run/dpdk/spdk_pid61148 00:22:06.017 Removing: /var/run/dpdk/spdk_pid61169 00:22:06.017 Removing: /var/run/dpdk/spdk_pid61213 00:22:06.017 Removing: /var/run/dpdk/spdk_pid61291 00:22:06.017 Removing: /var/run/dpdk/spdk_pid61363 00:22:06.017 Removing: /var/run/dpdk/spdk_pid61392 00:22:06.017 Removing: /var/run/dpdk/spdk_pid61497 00:22:06.017 Removing: /var/run/dpdk/spdk_pid61540 00:22:06.017 Removing: /var/run/dpdk/spdk_pid61578 00:22:06.017 Removing: /var/run/dpdk/spdk_pid61810 00:22:06.017 Removing: /var/run/dpdk/spdk_pid61913 00:22:06.017 Removing: /var/run/dpdk/spdk_pid61936 00:22:06.017 Removing: /var/run/dpdk/spdk_pid61971 00:22:06.017 Removing: /var/run/dpdk/spdk_pid62003 00:22:06.017 Removing: /var/run/dpdk/spdk_pid62038 00:22:06.017 Removing: /var/run/dpdk/spdk_pid62077 00:22:06.017 Removing: /var/run/dpdk/spdk_pid62114 00:22:06.017 Removing: /var/run/dpdk/spdk_pid62512 00:22:06.017 Removing: /var/run/dpdk/spdk_pid62551 00:22:06.017 Removing: /var/run/dpdk/spdk_pid62899 00:22:06.017 Removing: /var/run/dpdk/spdk_pid63373 00:22:06.017 Removing: /var/run/dpdk/spdk_pid63647 00:22:06.017 Removing: /var/run/dpdk/spdk_pid64501 00:22:06.017 Removing: /var/run/dpdk/spdk_pid65423 00:22:06.017 Removing: /var/run/dpdk/spdk_pid65540 00:22:06.017 Removing: /var/run/dpdk/spdk_pid65608 00:22:06.017 Removing: /var/run/dpdk/spdk_pid67026 00:22:06.017 Removing: /var/run/dpdk/spdk_pid67333 00:22:06.017 Removing: /var/run/dpdk/spdk_pid71092 00:22:06.017 Removing: /var/run/dpdk/spdk_pid71458 00:22:06.017 Removing: /var/run/dpdk/spdk_pid71567 00:22:06.017 Removing: /var/run/dpdk/spdk_pid71703 00:22:06.017 Removing: /var/run/dpdk/spdk_pid71737 00:22:06.017 Removing: /var/run/dpdk/spdk_pid71758 00:22:06.017 Removing: /var/run/dpdk/spdk_pid71779 00:22:06.017 Removing: /var/run/dpdk/spdk_pid71859 00:22:06.017 Removing: /var/run/dpdk/spdk_pid72000 00:22:06.017 Removing: /var/run/dpdk/spdk_pid72155 00:22:06.017 Removing: /var/run/dpdk/spdk_pid72234 00:22:06.017 Removing: /var/run/dpdk/spdk_pid72430 
00:22:06.017 Removing: /var/run/dpdk/spdk_pid72499 00:22:06.017 Removing: /var/run/dpdk/spdk_pid72584 00:22:06.017 Removing: /var/run/dpdk/spdk_pid72939 00:22:06.017 Removing: /var/run/dpdk/spdk_pid73349 00:22:06.017 Removing: /var/run/dpdk/spdk_pid73350 00:22:06.017 Removing: /var/run/dpdk/spdk_pid73351 00:22:06.017 Removing: /var/run/dpdk/spdk_pid73619 00:22:06.017 Removing: /var/run/dpdk/spdk_pid73886 00:22:06.017 Removing: /var/run/dpdk/spdk_pid74274 00:22:06.017 Removing: /var/run/dpdk/spdk_pid74282 00:22:06.276 Removing: /var/run/dpdk/spdk_pid74608 00:22:06.276 Removing: /var/run/dpdk/spdk_pid74622 00:22:06.276 Removing: /var/run/dpdk/spdk_pid74636 00:22:06.276 Removing: /var/run/dpdk/spdk_pid74671 00:22:06.276 Removing: /var/run/dpdk/spdk_pid74677 00:22:06.276 Removing: /var/run/dpdk/spdk_pid75029 00:22:06.276 Removing: /var/run/dpdk/spdk_pid75074 00:22:06.277 Removing: /var/run/dpdk/spdk_pid75419 00:22:06.277 Removing: /var/run/dpdk/spdk_pid75618 00:22:06.277 Removing: /var/run/dpdk/spdk_pid76067 00:22:06.277 Removing: /var/run/dpdk/spdk_pid76611 00:22:06.277 Removing: /var/run/dpdk/spdk_pid77513 00:22:06.277 Removing: /var/run/dpdk/spdk_pid78146 00:22:06.277 Removing: /var/run/dpdk/spdk_pid78149 00:22:06.277 Removing: /var/run/dpdk/spdk_pid80194 00:22:06.277 Removing: /var/run/dpdk/spdk_pid80254 00:22:06.277 Removing: /var/run/dpdk/spdk_pid80307 00:22:06.277 Removing: /var/run/dpdk/spdk_pid80361 00:22:06.277 Removing: /var/run/dpdk/spdk_pid80474 00:22:06.277 Removing: /var/run/dpdk/spdk_pid80528 00:22:06.277 Removing: /var/run/dpdk/spdk_pid80575 00:22:06.277 Removing: /var/run/dpdk/spdk_pid80628 00:22:06.277 Removing: /var/run/dpdk/spdk_pid80983 00:22:06.277 Removing: /var/run/dpdk/spdk_pid82205 00:22:06.277 Removing: /var/run/dpdk/spdk_pid82338 00:22:06.277 Removing: /var/run/dpdk/spdk_pid82581 00:22:06.277 Removing: /var/run/dpdk/spdk_pid83172 00:22:06.277 Removing: /var/run/dpdk/spdk_pid83332 00:22:06.277 Removing: /var/run/dpdk/spdk_pid83494 00:22:06.277 Removing: /var/run/dpdk/spdk_pid83591 00:22:06.277 Clean 00:22:06.844 13:08:38 nvmf_dif -- common/autotest_common.sh@1453 -- # return 22 00:22:06.845 13:08:38 nvmf_dif -- common/autotest_common.sh@1 -- # : 00:22:06.845 13:08:38 nvmf_dif -- common/autotest_common.sh@1 -- # exit 1 00:22:06.845 13:08:38 -- spdk/autorun.sh@27 -- $ trap - ERR 00:22:06.845 13:08:38 -- spdk/autorun.sh@27 -- $ print_backtrace 00:22:06.845 13:08:38 -- common/autotest_common.sh@1157 -- $ [[ ehxBET =~ e ]] 00:22:06.845 13:08:38 -- common/autotest_common.sh@1159 -- $ args=('/home/vagrant/spdk_repo/autorun-spdk.conf') 00:22:06.845 13:08:38 -- common/autotest_common.sh@1159 -- $ local args 00:22:06.845 13:08:38 -- common/autotest_common.sh@1161 -- $ xtrace_disable 00:22:06.845 13:08:38 -- common/autotest_common.sh@10 -- $ set +x 00:22:06.845 ========== Backtrace start: ========== 00:22:06.845 00:22:06.845 in spdk/autorun.sh:27 -> main(["/home/vagrant/spdk_repo/autorun-spdk.conf"]) 00:22:06.845 ... 00:22:06.845 22 trap 'timing_finish || exit 1' EXIT 00:22:06.845 23 00:22:06.845 24 # Runs agent scripts 00:22:06.845 25 $rootdir/autobuild.sh "$conf" 00:22:06.845 26 if ((SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1)); then 00:22:06.845 => 27 sudo -E $rootdir/autotest.sh "$conf" 00:22:06.845 28 fi 00:22:06.845 ... 
00:22:06.845 00:22:06.845 ========== Backtrace end ========== 00:22:06.845 13:08:38 -- common/autotest_common.sh@1198 -- $ return 0 00:22:06.845 13:08:38 -- spdk/autorun.sh@1 -- $ timing_finish 00:22:06.845 13:08:38 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:22:06.845 13:08:38 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:06.845 13:08:38 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:22:06.845 13:08:38 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:06.856 [Pipeline] } 00:22:06.874 [Pipeline] // timeout 00:22:06.881 [Pipeline] } 00:22:06.897 [Pipeline] // stage 00:22:06.905 [Pipeline] } 00:22:06.910 ERROR: script returned exit code 1 00:22:06.910 Setting overall build result to FAILURE 00:22:06.924 [Pipeline] // catchError 00:22:06.934 [Pipeline] stage 00:22:06.936 [Pipeline] { (Stop VM) 00:22:06.949 [Pipeline] sh 00:22:07.280 + vagrant halt 00:22:10.571 ==> default: Halting domain... 00:22:15.866 [Pipeline] sh 00:22:16.149 + vagrant destroy -f 00:22:19.437 ==> default: Removing domain... 00:22:19.449 [Pipeline] sh 00:22:19.739 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:22:19.747 [Pipeline] } 00:22:19.757 [Pipeline] // stage 00:22:19.760 [Pipeline] } 00:22:19.768 [Pipeline] // dir 00:22:19.772 [Pipeline] } 00:22:19.780 [Pipeline] // wrap 00:22:19.784 [Pipeline] } 00:22:19.794 [Pipeline] // catchError 00:22:19.801 [Pipeline] stage 00:22:19.803 [Pipeline] { (Epilogue) 00:22:19.812 [Pipeline] sh 00:22:20.093 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:22.013 [Pipeline] catchError 00:22:22.016 [Pipeline] { 00:22:22.030 [Pipeline] sh 00:22:22.313 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:22.314 Artifacts sizes are good 00:22:22.323 [Pipeline] } 00:22:22.337 [Pipeline] // catchError 00:22:22.349 [Pipeline] archiveArtifacts 00:22:22.357 Archiving artifacts 00:22:22.545 [Pipeline] cleanWs 00:22:22.556 [WS-CLEANUP] Deleting project workspace... 00:22:22.556 [WS-CLEANUP] Deferred wipeout is used... 00:22:22.563 [WS-CLEANUP] done 00:22:22.565 [Pipeline] } 00:22:22.580 [Pipeline] // stage 00:22:22.585 [Pipeline] } 00:22:22.600 [Pipeline] // node 00:22:22.605 [Pipeline] End of Pipeline 00:22:22.638 Finished: FAILURE